From: allin
Google, founded by Larry Page and Sergey Brin in 1998 [00:01:43] (its website Google.com was registered on September 15th, 1997 [00:00:03]), has seen its co-founder Sergey Brin return to actively help with the company’s efforts in artificial intelligence (AI) [00:00:20], [00:01:48]. Brin views his involvement as fortunate, working on something he feels “really matter[s]” [00:00:24].
Sergey Brin’s Perspective on AI’s Evolution
Brin is currently spending “pretty much every day” on AI work [00:02:20]. He describes the recent AI progress as the most exciting development he’s witnessed as a computer scientist [00:02:32]. He recalls a time in the 1990s when AI was merely a “footnote” in academic curricula, with approaches like neural networks largely “discarded” [00:02:47]. The current miraculous progress, he notes, stemmed from increased computational power, more data, and “a few clever algorithms” [00:03:05]. Brin is particularly impressed by the “amazing capability” that emerges almost monthly from AI tools [00:03:39].
AI’s Broad Impact Beyond Search
While large language models (LLMs) and conversational AI tools have been framed by some industry analysts as a potential threat to Google Search [00:01:59], Brin believes AI touches “so many different elements of day-to-day life,” with search being “one of them,” but it “kind of covers everything” [00:04:12].
AI in Programming
One significant area AI impacts, according to Brin, is programming itself. He finds that writing code from scratch “feels really hard compared to just asking the AI to do it” [00:04:28]. He shared an anecdote in which he had an AI model write code that generates Sudoku puzzles, feeds them back to the AI, and scores its performance, completing the whole task in about half an hour [00:05:00], which surprised even Google engineers who weren’t fully using AI for their own coding [00:05:22]. He also noted that his kid can program “really complicated things” simply by asking the AI to use complex APIs that would normally take a month to learn [00:16:57].
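As a rough illustration of the kind of harness Brin describes, the sketch below generates simple Sudoku puzzles, hands each one to a model, and scores the replies. It is a minimal sketch under stated assumptions, not Brin’s actual code: `ask_model` is a hypothetical placeholder for whatever LLM API is used, the puzzle generator does not guarantee unique solutions, and the answer parsing is deliberately naive.

```python
"""Minimal sketch of a Sudoku benchmark harness (illustrative only)."""
import random
import re

def make_solved_grid(rng: random.Random) -> list[list[int]]:
    # Standard valid pattern, relabeled with a random digit permutation for variety.
    digits = list(range(1, 10))
    rng.shuffle(digits)
    return [[digits[(3 * (r % 3) + r // 3 + c) % 9] for c in range(9)] for r in range(9)]

def make_puzzle(solution: list[list[int]], blanks: int, rng: random.Random) -> list[list[int]]:
    # Blank out `blanks` cells (0 = empty); uniqueness of the solution is not enforced here.
    puzzle = [row[:] for row in solution]
    for r, c in rng.sample([(r, c) for r in range(9) for c in range(9)], blanks):
        puzzle[r][c] = 0
    return puzzle

def grid_to_text(grid: list[list[int]]) -> str:
    return "\n".join(" ".join(str(v) if v else "." for v in row) for row in grid)

def parse_answer(text: str) -> list[list[int]] | None:
    # Naive parsing: take the first 81 digits 1-9 found in the reply.
    digits = [int(d) for d in re.findall(r"[1-9]", text)]
    return [digits[i * 9:(i + 1) * 9] for i in range(9)] if len(digits) >= 81 else None

def is_correct(puzzle: list[list[int]], answer: list[list[int]] | None) -> bool:
    # The answer must respect the givens and make every row, column, and box a permutation of 1..9.
    if answer is None:
        return False
    if any(puzzle[r][c] and answer[r][c] != puzzle[r][c] for r in range(9) for c in range(9)):
        return False
    units = [[(r, c) for c in range(9)] for r in range(9)]                      # rows
    units += [[(r, c) for r in range(9)] for c in range(9)]                     # columns
    units += [[(br + i, bc + j) for i in range(3) for j in range(3)]
              for br in range(0, 9, 3) for bc in range(0, 9, 3)]                # 3x3 boxes
    return all(sorted(answer[r][c] for r, c in unit) == list(range(1, 10)) for unit in units)

def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: call an LLM API of your choice and return its text reply.
    raise NotImplementedError

def run_benchmark(n_puzzles: int = 20, blanks: int = 40, seed: int = 0) -> float:
    rng = random.Random(seed)
    solved = 0
    for _ in range(n_puzzles):
        puzzle = make_puzzle(make_solved_grid(rng), blanks, rng)
        prompt = "Solve this Sudoku; reply with 81 digits, row by row:\n" + grid_to_text(puzzle)
        solved += is_correct(puzzle, parse_answer(ask_model(prompt)))
    return solved / n_puzzles
```

The scorer only checks that a reply respects the given clues and fills every row, column, and box with 1–9, so a valid alternative solution to an under-constrained puzzle still counts as correct.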
The “God Model” Debate: General Purpose vs. Specific Models
Brin discussed the ongoing debate about how AI models will evolve: whether the world is moving toward “ginormous general purpose LLMs” (sometimes referred to as a “God model,” or artificial general intelligence, AGI) or whether the future lies in “lots of smaller models that do application specific things,” potentially working together in agent systems [00:05:47].
Historically, different AI techniques were used for distinct problems, such as chess-playing AI versus image generation, or Google’s graph neural network outperforming physics-based forecasting models [00:06:28]. Even a recent triumph, Google’s AI earning a silver medal (one point short of gold) at the International Mathematical Olympiad, relied on three different AI models: a formal theorem-proving model, a geometry-specific AI, and a general-purpose language model [00:07:04]. However, Brin states that learnings from specialized models are being infused into general language models, and he sees a trend toward “a more unified model,” perhaps not a “god model,” but “certainly sort of shared architectures and ultimately even shared models” [00:07:40].
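To make the two sides of that debate concrete, here is a hedged Python sketch loosely modeled on the three-model IMO setup described above. Every function name and call signature in it is hypothetical, standing in for real specialized and general-purpose models.

```python
"""Illustrative contrast between an agent-style system of specialized models and a
single unified model. All functions here are hypothetical stand-ins for real models."""
from typing import Callable

def formal_theorem_prover(problem: str) -> str:
    raise NotImplementedError  # stand-in for a formal theorem-proving model

def geometry_solver(problem: str) -> str:
    raise NotImplementedError  # stand-in for a geometry-specific model

def general_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for a general-purpose language model

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "formal_proof": formal_theorem_prover,
    "geometry": geometry_solver,
}

def solve_with_agents(problem: str) -> str:
    # Agent-style pipeline: the general model routes the problem to a specialist,
    # then turns the specialist's raw output into a readable solution.
    task_type = general_llm(
        f"Classify this problem as 'formal_proof', 'geometry', or 'other':\n{problem}"
    ).strip()
    specialist = SPECIALISTS.get(task_type, general_llm)
    return general_llm("Explain this solution clearly:\n" + specialist(problem))

def solve_with_unified_model(problem: str) -> str:
    # Unified alternative: one shared model handles the whole task end to end.
    return general_llm("Solve this problem and explain your reasoning:\n" + problem)
```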
Compute Demand and Supply in AI
Training and developing large, unified models require “a lot of compute” [00:08:12]. While Brin is skeptical of “blindly” extrapolating three-orders-of-magnitude growth in compute demand, he acknowledges that algorithmic improvements are “outpacing the increased compute” [00:08:36]. Nonetheless, Google is building out compute as quickly as possible due to a “huge amount of demand” from cloud customers for TPUs and GPUs [00:09:28]. They even “have to turn down customers” because they lack sufficient compute [00:09:41], and they also need compute internally to train and serve their own models [00:09:47]. He concludes that there are “very good reasons that companies are currently building out compute at a fast pace” [00:09:54], and that there seems to be “no limit” to enterprise demand for AI inference and new applications [00:10:10].
Surprising Successes and Challenges in AI Applications
Brin highlighted successes in biology, citing AlphaFold and its variants as tools widely used by biologists [00:10:54], and noting that these different AI types tend to “converge” [00:11:12].
In robotics, he finds it “amazing” that general-purpose language models, sometimes with fine-tuning, can make robots perform complex tasks [00:11:18]. However, he notes that robotics is “not for the most part yet at the level of robustness that would make it like day-to-day useful” [00:11:36], although he “see[s] a line of sight to it” [00:11:42]. He regretted Google’s past ventures into robotics (e.g., Boston Dynamics), admitting they were “a little too early” [00:12:05] and on a “treadmill that wasn’t going to get anywhere without the modern AI technology” [00:12:47].
Google’s Cultural Shift: Embracing Risk in AI Deployment
A key aspect of Google’s AI strategy has been a shift away from conservatism. Brin acknowledges that Google was initially “too timid” to deploy its internally developed Transformer models [00:16:21]. Reasons for this hesitation included concerns about mistakes, embarrassing outputs, or the AI appearing “dumb” [00:16:26].
However, Brin has pushed for a more aggressive deployment strategy. He supported pushing AI code-writing capabilities into Gemini despite engineers’ desire to perfect them first, emphasizing that Google’s “conservatism…can’t rule the day today” [00:15:11]. He believes that the “magical” and “incredibly powerful” capabilities of AI warrant taking “some embarrassments” and “some risks” [00:17:14]. His stance is that if AI is “something magical we’re giving the world,” and users are clearly told that it “will periodically get stuff really wrong,” then “we should put it out there and let people experiment” [00:17:48]. He views this as technology that Google should not “keep close to the chest and hidden until it’s like perfect” [00:18:14].
The Nature of AI Competition
Addressing the perception of an AI search war between Google and Microsoft, and of a broader AI race among tech giants like Google, Meta, and Amazon, Brin acknowledges that competition is “very helpful” [00:18:54]. He notes Google’s recent achievement of being “number one” on some benchmarks for a couple of weeks [00:19:00], and expresses satisfaction with Google’s progress since the launch of ChatGPT [00:19:24].
Despite the competitive landscape, Brin emphasizes that there’s “tremendous value to humanity” to be created [00:19:50]. He draws a parallel to the early internet, which drastically improved access to information and communication [00:19:57]. Similarly, he sees the “new AI” as “another big capability” that “pretty much everybody in the world can get access to in one form or another” [00:20:29], finding this prospect “super exciting” and “awesome” [00:20:34].