From: redpointai
Eric Ries, author of The Lean Startup, discusses the application of Lean Startup principles in the context of AI startups, observing both similarities and unique challenges within the burgeoning AI industry [00:00:14].
Current State of AI Startups vs. Lean Startup Ideals
Ries notes a divergence from typical Lean Startup practices in the current AI landscape, particularly with large funding rounds, significant spending on models and compute, and a lack of early market interaction [00:00:57]. He attributes this partly to the “magical” nature of AI demos, which can easily convince founders they are the exception and don’t need customer testing [00:01:36].
The Enduring Core Principle
Despite these trends, Ries asserts that the fundamental principles remain true: it’s impossible to know in advance what customers will want [00:01:55]. He emphasizes the need to experiment and discover customer needs through their “revealed actions” [00:02:12]. As Peter Drucker stated, “a business is an entity that exists to create a customer,” and AI agents are not the customers—human beings are [00:04:15].
Critique of “SaaS-ification” of AI
A significant challenge identified is the tendency to simply copy-paste the SaaS stack onto AI, assuming everything will be the same [00:02:23]. Many AI companies creating APIs push the product-market fit question to a different layer of the stack, sometimes two, three, or even four layers deep between the model and the end product [00:02:45]. If the AI stack is fundamentally different, this approach could lead to “carnage” in applications [00:03:11]. Ries advocates for understanding the “end, end, end customer” regardless of one’s position in the stack to ensure product-market fit [00:03:20].
AI Economics and Physical Manufacturing Parallels
The economics of AI are “completely different” from traditional software, drawing more parallels to physical manufacturing, deep-sea oil drilling, or nuclear power plants [00:03:41]. Those industries carry heavy infrastructure and operating costs on top of market risk, a combination AI shares because of its computational demands [00:03:54].
Moats, Defensibility, and Rapid Iteration
The question of “moats” and defensibility in AI often paralyzes companies from acting [00:04:46]. While large platforms like OpenAI could theoretically replicate any feature, they are limited by focus and cannot do everything [00:05:31].
“If you’re fast enough, you can jump into the street, grab the dime, jump out of the street… picking up these use cases one after the other” [00:05:51].
However, the risk is being “flattened” if one “trips and falls” [00:06:03]. Ries notes that the tech industry has seen platform wars before, and understanding industry analysis and strategic options used to be a larger part of startup building [00:06:10].
Despite these concerns, Ries aligns with Aravind Srinivas of Perplexity.ai, stating that it’s easier to talk oneself out of doing anything than to build something customers want [00:06:51]. While analysis is useful and some ideas can be ruled out (e.g., pure exploitation), passion for an idea should not be deterred by external skepticism [00:07:03].
Crucially, this uncertainty necessitates building a company in a way that allows for rapid iteration and pivoting [00:07:37]. Founders must be alert to the possibility that their assumptions are wrong [00:07:42]. The pace of change in AI makes rapid adaptation and feedback more important than ever [00:08:27]. The industry is “speedrunning the usual hype cycle,” with companies that didn’t adapt already facing “serious trouble” [00:08:51].
Answer AI’s R&D Lab Approach
Eric Ries and Jeremy Howard founded Answer AI as a for-profit R&D lab, aiming to be the “Bell Labs of AI” [00:00:00]. Their mission is to maximize the public benefit of AI for as many people as possible, similar to Jeremy Howard’s previous venture, Fast.ai [00:09:34].
Addressing Accessibility and Centralization
Howard notes that Fast.ai was limited by requiring a strong coding background, restricting access to less than 1% of the world’s population [00:12:18]. Answer AI seeks to make AI more accessible through natural language and other modalities [00:12:40]. This also counters a perceived acceleration of centralization of power in AI, which Fast.ai was created to address [00:13:01].
Integrating Research and Application
Ries clarifies that Answer AI’s R&D lab model is not antithetical to Lean Startup. Instead, it reflects a belief that the best research happens when the researcher is “coupled to the application” [00:16:04]. This contrasts with the modern hyperspecialization that separates research and development, often leading to scientific breakthroughs that lack customer value [00:15:35]. Answer AI aims for a continuous iteration loop, from customer feedback to scientific inquiry and back, allowing for “tremendous breakthroughs” [00:18:19].
Focus on Resource Constraints and Practicality
Answer AI’s research thesis is that the industry has overinvested in training foundation models from scratch on expensive hardware, and underinvested in “the real world, which is resource constrained” [00:25:40]. Their work on efficient fine-tuning of Llama 3 exemplifies this, drastically reducing costs and making AI more accessible [00:26:49].
“A difference in degree becomes a difference in kind” [00:27:17]. Reducing inference costs doesn’t just improve margins; it makes new applications possible and allows for “continuous fine-tuning” of individual agents for hyper-personalization and context [00:27:46].
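As a concrete illustration of this resource-constrained style of work, here is a minimal parameter-efficient fine-tuning sketch, assuming the Hugging Face transformers, peft, and bitsandbytes libraries. The model ID and hyperparameters are illustrative only; Answer AI’s actual efficient fine-tuning work is considerably more involved than this.

```python
# A minimal QLoRA-style sketch: quantize the frozen base model to 4-bit and
# train only small low-rank adapters. All names/values here are illustrative,
# not Answer AI's actual recipe.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Meta-Llama-3-8B"  # assumed model; any causal LM works

# Load the base model in 4-bit so it fits on a single consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Attach low-rank adapters; the quantized base weights stay frozen.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Because only the small adapter weights train while the base model stays quantized and frozen, fine-tuning that once demanded multi-GPU servers can run on commodity hardware. That is the “difference in degree becomes a difference in kind” dynamic in practice: cheap per-user adapters are what make continuous, per-agent fine-tuning plausible.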
Ries compares this to Thomas Edison’s obsession with the “manufacturability” of electricity applications, stressing that “we have too many splashy demos and not enough deployed products” [00:29:19]. Practicality, deployability, cost, and usability are key [00:29:57].
Application Areas
Answer AI is particularly excited about applying AI in law and education due to their language-based nature and potential for societal benefit [00:34:46].
- Law: The law is often used as a “weapon by wealthy people and organizations” [00:35:36]. Reducing the cost of high-quality legal advice can combat injustice and gatekeeping [00:36:14].
- Education: There are many opportunities to improve education, allowing more people to achieve their potential by removing constraints [00:37:24].
Overhyped vs. Underhyped AI (from a Lean Perspective)
- Overhyped: AI agents [00:48:34]. Jeremy Howard believes current agent capabilities are not compatible with the mathematical foundations of language models, especially for novel planning sequences not present in training data [00:48:40].
- Underhyped: Resource efficiency [00:48:36]. This is critical for wider accessibility and enabling new use cases.
Challenges in AI Adoption and AI Safety
Ries suggests that the primary issue isn’t with foundation model labs seeking AGI, but with the “fundraising gravity” that pushes entrepreneurs away from practical utility towards “science fiction and speculative stuff” [00:45:22]. This leads to a lack of focus on real-world applications that don’t require AGI [00:45:14].
Using large frontier models for applications that could be handled by smaller, properly fine-tuned, and inherently safer models allows “a lot of unsafe things to happen” when those systems are connected to the real world [00:46:05]. The lack of intrinsically safe options means people will default to unsafe choices [00:46:38].
Jeremy Howard expresses concern about proposed legislation like California’s SB 1047, which aims to ensure the safety of AI models [00:38:22]. His research indicates such policies would likely be ineffective or even counterproductive, leading to a less safe situation [00:40:01].
Models are a “purely dual use technology,” similar to a pen, paper, or calculator [00:40:32]. You cannot ensure the safety of the model itself because it can be fine-tuned or prompted to do anything once released [00:40:55]. Regulations that mandate ensuring model safety would, in practice, prevent the release of raw models, only allowing products (like ChatGPT) built on top of them [00:41:37].
“Models in their raw form are much more powerful because you can fine-tune them, you can study their weights… you can really control them” [00:42:42].
Restricting access to raw models makes them an “extremely rivalrous good,” jealously guarded by big states and companies, fostering competition rather than transparency [00:43:03]. This centralization of power hinders open-source research and the development of defensive applications, such as improving cybersecurity or vaccines [00:43:40].
Reestablishing Customer Connection in Foundation Labs
If Eric Ries were running a foundation model company, he would try to reestablish the connection between research and the customer [00:47:40]. This would involve taking responsibility for customer success and actively observing customer needs, much like the Toyota Production System [00:47:52]. This approach can help align research efforts with real-world value and avoid the “schizophrenic” split often seen between commercial and safety agendas in larger organizations [00:47:06].
Potential Breakthroughs
Two breakthroughs could significantly alter the landscape of AI:
- Reduced Energy Requirements: A breakthrough in the massive energy and resource requirements of AI models [00:50:34].
- Advanced Planning and Reasoning: A breakthrough in planning and reasoning capabilities that moves beyond “subgraph matching” and the current autoregressive next-token approach (sketched below) [00:51:14]. This could involve approaches like Yann LeCun’s JEPA-based models (Joint Embedding Predictive Architecture) or diffusion models for text [00:51:25].
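To make the critique concrete, here is a minimal sketch of the autoregressive loop being described, assuming Hugging Face transformers and using gpt2 purely as a small stand-in model: at every step the model commits to a single next token conditioned on everything generated so far, with no explicit lookahead, search, or plan.

```python
# Minimal sketch of autoregressive next-token decoding (illustrative; gpt2 is
# a stand-in for any causal language model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

ids = tokenizer("The plan has three steps:", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[:, -1, :]           # distribution over the next token only
        next_id = logits.argmax(dim=-1, keepdim=True)  # greedy pick: no lookahead or search
        ids = torch.cat([ids, next_id], dim=-1)        # commit to the token; never revisit it
print(tokenizer.decode(ids[0]))
```

Each iteration picks exactly one token and can never backtrack, which is why Howard argues that novel multi-step plans absent from the training data are hard for this mechanism to produce.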
Ries also muses on the possibility of a breakthrough in understanding human cognition itself, suggesting that current LLMs might be a brute-force, highly inefficient way to emulate cognition [00:53:20], similar to building a calculator out of Minecraft blocks by trying every combination [00:53:51]. If a direct algorithm for cognition were discovered, it would be a “breakthrough” that would “shake my beliefs quite a bit” [00:54:23].