From: redpointai

Eric Ries and Jeremy Howard are building the “Bell Labs of AI” through their company, Answer AI. Their goal is to build smaller, cheaper, and more accessible AI models and applications, particularly in the legal and education sectors [00:00:05].

Challenges in AI Development and Adoption

A current trend in the AI world involves raising large funding rounds and spending heavily on models and compute long before products reach the market [00:00:57]. This contrasts with traditional Lean Startup principles, which call for validating demand with real customers before committing to large investments.

While AI applications often produce “unbelievably good demos” that can be convincing, it is crucial to test products with actual customers [00:01:36]. Companies are too readily applying the Software as a Service (SaaS) stack model to AI, which may not be suitable [00:02:19]. The economics of AI are vastly different from traditional software, bearing more resemblance to physical manufacturing, deep-sea oil drilling, or nuclear power plants, which involve significant infrastructure and operating costs [00:03:41].

This approach often pushes the product-market fit question to different layers of the supposed “stack” [00:02:45]. Many AI companies, especially those providing APIs, assume their customers will define the product-market fit with end-users, leading to a disconnect multiple layers deep between the model and the final product [00:03:03]. It is essential to understand the “end-end-end customer” regardless of where a company operates in the stack [00:03:21].

Despite the theoretical risk of large platforms “nuking” smaller players, these giants cannot focus on everything simultaneously, creating opportunities for specialized applications [00:05:23]. However, there is a tendency for fundraising gravity to push entrepreneurs towards science fiction and speculative ventures rather than practical utility [00:45:26].

Answer AI’s Approach to Accessibility

Jeremy Howard’s prior initiative, fast.ai, aimed to maximize the public benefit of AI for as many people as possible [00:09:42]. However, fast.ai was “hamstrung” because its resources and software required a strong coding background, restricting access to less than 1% of the world’s population [00:12:18]. Answer AI seeks to overcome this by leveraging natural language and other natural modalities (like vision) to make AI more accessible [00:12:40].

A core concern for Answer AI is to counteract the potential for AI to lead to massive centralization of power and decreased opportunities [00:13:07]. Answer AI is structured as a for-profit R&D lab, an unconventional model that doesn’t fit the typical startup mold with clear proprietary technology or a defined five-year financial plan [00:14:35].

The company aims to reintegrate research and development, believing that the best research occurs when the researcher is closely connected to the application [00:16:02]. This creates a continuous feedback loop, with customer needs informing scientific inquiry and research results flowing back into the product [00:16:21].

Focus on Cost Reduction and Efficiency

Answer AI believes there is an overinvestment in training large foundation models from scratch on expensive hardware [00:25:40]. They prioritize the “real world,” which is resource-constrained [00:25:52]. Their breakthrough in efficiently fine-tuning Llama 3 exemplifies this, demonstrating how costs can be cut dramatically and AI made more accessible [00:26:18].
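
The episode doesn’t walk through the recipe, but the usual shape of this kind of cost reduction is parameter-efficient fine-tuning on quantized weights (QLoRA-style), where only small adapter matrices are trained. The sketch below is illustrative only: the model name, rank, and target modules are assumptions, not Answer AI’s actual method.

```python
# Minimal QLoRA-style fine-tuning sketch (assumed setup, not Answer AI's recipe).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Meta-Llama-3-8B"  # illustrative; any causal LM works

# Load the base model with 4-bit quantized weights to slash GPU memory needs.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Train only small low-rank adapter matrices instead of all base weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Because only the adapters are trained, a job that would otherwise require a multi-GPU server can fit on a single commodity GPU, which is the “difference in degree becomes a difference in kind” effect discussed below.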

A “difference in degree becomes a difference in kind” when costs are dramatically reduced [00:27:17]. Cheaper inference costs not only improve margins but also enable new applications that are currently too expensive [00:27:52]. For example, sufficiently low costs could enable continuous fine-tuning or “continuous pre-training” of individual AI agents on inexpensive virtual machines, leading to hyper-personalization and persistent memory for AI agents [00:28:45].
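
One way to picture that kind of hyper-personalization is a tiny per-user adapter that is refreshed as interactions accumulate, so the agent’s “memory” lives in cheap, periodically retrained weights rather than an ever-growing context window. The sketch below is speculative; the class and method names are hypothetical and not something described in the episode.

```python
# Speculative sketch of per-user "continuous fine-tuning" as persistent memory.
from dataclasses import dataclass, field

@dataclass
class PersonalAgent:
    """Hypothetical per-user agent whose memory is a small, regularly retrained adapter."""
    user_id: str
    interactions: list = field(default_factory=list)
    update_every: int = 100  # refresh the adapter after this many new examples

    def record(self, user_turn: str, agent_turn: str) -> None:
        self.interactions.append((user_turn, agent_turn))
        if len(self.interactions) >= self.update_every:
            self.refresh_adapter()

    def refresh_adapter(self) -> None:
        # In practice: run a short QLoRA-style training job (as sketched above) on the
        # accumulated interactions on an inexpensive VM, then save the small adapter
        # file as this user's persistent, continuously updated memory.
        print(f"Updating adapter for {self.user_id} on {len(self.interactions)} examples")
        self.interactions.clear()

agent = PersonalAgent(user_id="alice", update_every=2)
agent.record("What did we decide last week?", "You chose the cheaper GPU plan.")
agent.record("Remind me about it tomorrow.", "Noted; I'll bring it up then.")
```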

This focus echoes Thomas Edison’s approach: prioritizing practical application and removing the obstacles that keep technology from being deployed and used [00:29:17]. There is a need for more “deployed products” rather than just “splashy demos” [00:29:54].

Answer AI sees significant opportunities in law and education due to their heavy reliance on language [00:34:51].

  • Legal Sector: The law is often used as a “weapon” by the wealthy, creating injustice [00:35:36]. Reducing the cost of high-quality legal advice can make the law more equitable and accessible to those with fewer resources [00:36:14]. AI can help break down “gatekeeping” mechanisms in regulated markets [00:36:43].
  • Education Sector: There are many opportunities to improve education, especially by overcoming the constraints of a “one-size-fits-all” system [00:37:22]. AI can help personalize learning paths, enabling more people to build and achieve their goals [00:37:59].

Policy Implications of AI Advancements

Jeremy Howard raised concerns about proposed legislation, such as California’s SB 1047, which aims to ensure the safety of AI models [00:39:02]. He argues that regulating the “safety” of the models themselves, which are dual-use technologies like a pen or a calculator, is ineffective and potentially counterproductive [00:40:01].

Such policies could:

  • Prevent Model Release: If a company must “ensure the safety” of a raw model, it effectively means they cannot release the model, only products built on top of it [00:42:31].
  • Centralize Power: This makes raw models an “extremely rivalrous good” and a “jealously guarded secret,” accessible only to large states and corporations [00:43:08].
  • Reduce Transparency: It limits the ability to study how models work, hindering defensive applications like cybersecurity or vaccine development [00:43:57].

Jeremy suggests that allowing open-source models and the ability to fine-tune them leads to a safer ecosystem because it enables a wider class of intrinsically safe applications and prevents default reliance on potentially less safe frontier models [00:46:32].

Future Breakthroughs

Key breakthroughs that would significantly impact the field include:

  • Energy and Resource Efficiency: A major reduction in the massive energy and other resource requirements for AI models [00:50:34].
  • Advanced Planning and Reasoning: A breakthrough in AI’s planning and reasoning capabilities that moves beyond current subgraph matching, potentially through approaches like JEPA-style models or diffusion models for text [00:51:14].

The hosts note the changing perception of human intelligence, suggesting that more of it is encoded in language than previously thought [00:52:38]. There’s a possibility that current large language models (LLMs) are a “brute force” and inefficient way of discovering critical algorithms in cognition, and a direct breakthrough in understanding human cognition could revolutionize AI development [00:53:12].

Conclusion

The emphasis on cost efficiency and accessibility, rather than solely pursuing cutting-edge AGI, allows for the development of practical applications using current AI capabilities [00:56:30]. Industries like legal and education, with their clear language-in, language-out processes, offer fertile ground for such applications [00:57:03]. Understanding end-user incentives and iterating rapidly remain paramount in navigating the complex AI landscape [00:57:16].