From: redpointai

Answer AI, co-founded by Eric Ries and Jeremy Howard, aims to build the “Bell Labs of AI” by focusing on smaller, cheaper AI models and affordable applications in sectors like legal and education [00:00:00]. This approach contrasts with the prevailing pattern in the AI world, where companies raise large funding rounds and spend heavily on models and compute before engaging with the market [00:00:57].

Applying Lean Startup Principles to AI

Eric Ries notes that the principles of “The Lean Startup” remain deeply relevant in the AI industry [00:01:51]. AI demos can be so “magical” that companies believe they don’t need to test with customers, but the fundamental truth remains: it’s impossible to know in advance what customers will want [00:01:36]. Experimentation and discovering customer needs through “revealed actions” are crucial [00:02:12].

A significant challenge arises when the traditional SaaS stack is “copy-pasted” to AI on the assumption that the structures and economics carry over [00:02:23]. Many AI companies building APIs assume their customers will define product-market fit, which can leave them “two, three, or four layers deep between the model and the end product” [00:03:05]. Ries emphasizes the importance of understanding the “end, end, end customer” regardless of one’s position in the stack [00:03:23].

The economics of AI are “completely different” from traditional software, with real infrastructure and operating costs that draw closer parallels to physical manufacturing, deep-sea oil drilling, or nuclear power plants [00:03:41]. This amplifies risk rather than reducing it [00:04:00]. Even though defensibility (moats) is hard to establish in a rapidly evolving field, a strong vision, combined with a willingness to pivot and rapid iteration, remains essential [00:06:51].

Answer AI’s R&D Lab Model and Efficiency Focus

Answer AI operates as a for-profit R&D lab, an unusual structure in modern venture capital [00:14:35]. This model prioritizes integrating research (“R”) and development (“D”), believing that the best research is conducted when the researcher is “coupled to the application” [00:16:02]. This continuous feedback loop from customer needs back into scientific inquiry drives breakthroughs [00:16:21].

Answer AI’s focus on resource efficiency is core to its mission [00:25:52]. They see an “overinvestment in training foundation models from scratch” and in “gold-plated, super-expensive hardware,” paired with an underinvestment in “the real world, which is resource constrained” [00:25:40]. Their Llama 3 fine-tuning breakthrough, achieved by a single researcher, Karam, exemplifies this focus, showing how to significantly reduce training costs and improve accessibility [00:24:10].
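
The technique behind that kind of cost reduction is straightforward to sketch. Below is a minimal, illustrative QLoRA-style fine-tuning setup using the Hugging Face transformers, peft, and bitsandbytes libraries; this is not Answer AI’s actual code (their published work combines QLoRA with FSDP to shard a quantized model across consumer GPUs), and the model id and hyperparameters are placeholders.

```python
# Illustrative QLoRA setup: quantize the base model to 4 bits, then train
# only small low-rank adapter matrices. Model id and hyperparameters are
# placeholders, not Answer AI's actual configuration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Meta-Llama-3-8B"  # placeholder model id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # NF4 quantization shrinks weight memory ~4x
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute still runs in bf16
)

base = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

lora_config = LoraConfig(
    r=16,                                   # adapter rank: size of the trainable matrices
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],    # attach adapters to attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()          # typically well under 1% of all weights
```

Because only the adapter matrices are trained while the frozen base model sits in 4-bit memory, the hardware bar drops from a datacenter cluster toward commodity GPUs, which is the accessibility point the paragraph above makes.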

For many large AI labs, simply making something cheaper is “tedious” [00:27:00]. However, Jeremy Howard and Eric Ries argue that a “difference in degree becomes a difference in kind” [00:27:11]. Reducing inference costs doesn’t just improve margins; it makes entirely new applications possible [00:27:23]. The software industry is now dealing with physical supply chain and power constraints, making efficiency optimizations critical [00:27:30].
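
To make “a difference in degree becomes a difference in kind” concrete, here is a hedged back-of-envelope calculation; every number below is hypothetical, chosen only to show the shape of the argument.

```python
# Hypothetical numbers: an order-of-magnitude drop in inference cost changes
# which products are viable at all, not just their margins.
tokens_per_user_per_day = 50_000   # assumed heavy, always-on assistant usage
users = 100_000

for price_per_million in (10.00, 1.00, 0.10):   # $/1M tokens, hypothetical tiers
    daily_cost = users * tokens_per_user_per_day / 1e6 * price_per_million
    print(f"${price_per_million:>5.2f}/1M tokens -> ${daily_cost:>9,.0f}/day")

# At $10.00/1M tokens the product costs ~$50,000/day to serve; at $0.10/1M
# the same product costs ~$500/day, cheap enough to give away. That is a
# different kind of business, not a better margin on the same one.
```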

The Vision of Continuous Fine-Tuning

They envision a future in which it is “so cheap to fine-tune a model that you can do it continuously” [00:27:58]. This contrasts with today’s “amnesiac models,” which lack memory [00:28:06]. Continuously pre-training individual agents on inexpensive virtual machines could unlock hyper-personalization and context-aware use cases, making dedicated resources per customer feasible [00:28:45].
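
What “dedicated resources per customer” might look like is easy to sketch. The pattern below is hypothetical: each customer owns a small adapter that is periodically re-trained on that customer’s new interactions, cheap enough to run on an inexpensive VM. Every class, function, and path name here is invented for illustration.

```python
# Hypothetical sketch of continuous per-customer fine-tuning: each customer
# gets a small adapter, cheap enough to re-train on a schedule. All names
# here (CustomerAgent, fine_tune_adapter, paths) are invented for illustration.
from dataclasses import dataclass, field

def fine_tune_adapter(adapter_path: str, examples: list[str]) -> None:
    # Placeholder: in practice, a short LoRA training run over `examples`.
    print(f"updating {adapter_path} on {len(examples)} new examples")

@dataclass
class CustomerAgent:
    customer_id: str
    adapter_path: str                      # tiny LoRA weights, pennies to store
    new_interactions: list[str] = field(default_factory=list)

    def record(self, interaction: str) -> None:
        self.new_interactions.append(interaction)

    def maybe_update(self, min_batch: int = 32) -> None:
        # Re-train only once enough new data has accumulated; with a small
        # adapter this can run on an inexpensive VM, not a GPU cluster.
        if len(self.new_interactions) >= min_batch:
            fine_tune_adapter(self.adapter_path, self.new_interactions)
            self.new_interactions.clear()

agent = CustomerAgent("acme-corp", "adapters/acme-corp.safetensors")
for i in range(40):                        # simulated stream of interactions
    agent.record(f"interaction {i}")
    agent.maybe_update()                   # fires once 32 examples accumulate
```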

This focus on “manufacturability” and practical deployment over “splashy demos” is compared to Thomas Edison’s approach to electricity: making the technology practical and usable for widespread application [00:29:15].

Applications for Societal Benefit

Answer AI prioritizes areas where the ability to process and interact with language can deliver clear societal benefits [00:34:33]. Two key areas are:

  • Law: Described as a “very large language model” (text in, text out), the law is often used as a “weapon by wealthy people and organizations” against less wealthy ones [00:35:05]. By bringing down the cost of high-quality legal advice, AI can combat this injustice and reduce gatekeeping in regulated markets [00:36:12].
  • Education: As a homeschooling dad, Jeremy Howard sees immense opportunities to improve education [00:37:07]. AI can help remove the “constrained environment” where all children follow the same path, enabling more personalized learning and allowing people to “be the people they want to be” [00:37:48].

AI Safety, Centralization, and Openness

Jeremy Howard expressed concerns about proposed AI safety regulations, such as California’s SB 1047, which aims to ensure the safety of AI models [00:39:02]. He argues that such policies, while well-intentioned, could be ineffective and even produce the “opposite result,” creating a less safe situation [00:40:01].

The core issue is that AI models, like pens or calculators, are “dual-use technology” [00:40:32]. It’s impossible to ensure their safety in a way that prevents misuse if the raw model is accessible [00:40:46]. Regulatory attempts to ensure safety by restricting model releases (allowing only products on top of them) would transform models into “extremely rivalrous goods” [00:43:10]. This would lead to massive centralization of power and reduced transparency, hindering independent study and defensive applications like cybersecurity or vaccine development [00:43:46].

Instead of focusing solely on frontier AGI models, which can pose safety risks when deployed, Eric Ries advocates for building a “huge class of valuable applications that are intrinsically safe” using smaller, properly fine-tuned models [00:46:32]. He believes that if these safer options aren’t provided, people will default to less safe alternatives [00:46:38].

Overhyped, Underhyped, and Future Breakthroughs

Jeremy Howard considers “agents” to be overhyped because current attempts to use them are often “not compatible with the mathematical foundations of the models” [00:48:40]. Conversely, he sees resource efficiency as underhyped [00:48:38].

For future breakthroughs that could fundamentally change AI, two areas are highlighted:

  • Energy and Resource Requirements: A breakthrough in reducing the “massive energy requirements” of models would overcome a direct “economic or even physical obstacle” [00:50:34].
  • Planning and Reasoning Capabilities: A breakthrough that moves beyond “subgraph matching” (the pattern-matching behavior of today’s auto-regressive, word-by-word generation) into true planning and reasoning, possibly through approaches like JEPA-based models or diffusion models for text, would be transformative [00:51:14].

Eric Ries adds that a breakthrough in understanding human cognition itself, revealing a more efficient way to build intelligence than the current “brute force” methods (like building a calculator out of Minecraft blocks by trying every combination), would be mind-blowing [00:53:12].

More information can be found at Answer.AI [00:54:42].