From: redpointai

Answer AI, founded by Eric Ries and Jeremy Howard, aims to be the “Bell Labs of AI” [00:00:01]. The company focuses on building smaller, cheaper AI models and on developing applications in sectors such as law and education [00:00:05].

Founding Philosophy: A For-Profit R&D Lab

Answer AI operates as a for-profit R&D lab, a structure reminiscent of early industrial labs like Thomas Edison’s [00:14:35]. This approach stands in contrast to the traditional startup model, especially when considering questions of proprietary technology or product-market fit in a nascent field [00:14:42].

A core belief of Answer AI is that the best research occurs when the researcher is deeply coupled to the application, creating a continuous feedback loop from the customer back to the scientific inquiry [00:16:02]. This contrasts with the hyper-specialization seen in modern academia and industry, where research (R) and development (D) are often separated [00:15:35]. This integrated approach allows for breakthroughs driven by real-world customer needs, preventing researchers from solving problems irrelevant to the market [00:17:09].

Funding and Investors

Initially, the founders were unsure who would fund such a unique R&D lab [00:14:29]. However, they found support from investors who understood their vision, including a major AI safety advocate [00:18:43]. Traditional investors, accustomed to the Software-as-a-Service (SaaS) stack and clear business models, often struggled to understand Answer AI’s approach [00:19:50].

Team Culture and Exploration

Answer AI employs a “Long Leash with Narrow Fences” approach to R&D, meaning they define a broad research thesis (the fences) and give researchers freedom to explore within those bounds (the long leash) [00:25:24]. This fosters a team with a strong intuition for what AI technology can achieve [00:32:21].

To encourage exploration and deep engagement with the AI ecosystem, every team member receives a $500 monthly credit-card budget for purchasing and using AI-related products [00:33:11]. This leads to valuable discussions about product strengths and weaknesses, fostering appreciation for both research and commercial opportunities [00:33:41].

Applying Lean Startup Principles to AI

Eric Ries, author of “The Lean Startup,” notes that many companies in the AI world are spending large sums on models and compute before ever engaging with the market [00:00:57]. This is partly due to the “magical” nature of AI demos, which can lead founders to believe customer testing isn’t necessary [00:01:36]. Ries asserts that fundamental principles of the Lean Startup still apply:

  • Customer Needs: It’s impossible to know in advance what customers want [00:01:56]. Companies must discover this through experimentation and “revealed actions” [00:02:12].
  • Beyond the SaaS Stack: Many AI companies incorrectly try to “copy-paste” the SaaS stack to AI, assuming similar business models [00:02:22]. This can lead to a disconnect where AI API providers assume their customers will achieve product-market fit, even if the value chain is multiple layers deep from the model to the end-user [00:02:50].
  • Understanding End Customers: Regardless of where a company sits in the “stack,” it’s crucial to understand the ultimate end customer and their needs to ensure the product achieves product-market fit [00:03:21].
  • Economic Differences: The economics of AI differ significantly from traditional software, resembling physical manufacturing or deep infrastructure projects due to high computational and operating costs [00:03:41]. This amplifies risk, requiring adaptability [00:04:00].
  • Product-Market Fit First, Moats Later: While defensibility (moats) is a concern, Eric Ries agrees with Aravind Srinivas of Perplexity that companies should first focus on building something customers want, earning the right to think about moats later [00:05:03]. Trying to pick “dimes in front of a steamroller” (small opportunities near large platforms) can work if a company moves fast enough, but carries significant risk [00:05:19].
  • Rapid Iteration and Pivoting: Given the high uncertainty in AI, continuous rapid iteration and the ability to pivot are more important than ever [00:07:39]. Companies that skip this step are already running into trouble, accelerating the industry hype cycle [00:08:44].

Focus on Cost Reduction and Practical Applications

Answer AI aims to reduce the cost of AI development and deployment, specifically by “bringing the price down by 10x” [00:26:50]. This focus on resource efficiency is often seen as “tedious” by larger labs obsessed with pure performance [00:26:59].

However, reducing cost can create a “difference in kind,” not just degree [00:27:11]:

  • Enabling New Use Cases: Lower costs make viable applications that were previously ruled out by constraints such as physical installation, power access, and limited HBM memory manufacturing capacity [00:27:30].
  • Continuous Fine-tuning: If fine-tuning models becomes cheap enough, it enables “continuous pre-training” of individual agents. This allows for hyper-personalization, context, and memory, overcoming the “amnesiac” nature of current models that constantly forget previous interactions [00:28:01].
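The interview does not name a specific technique, but one common way to make per-agent fine-tuning this cheap is low-rank adaptation (LoRA-style), where each agent stores only a tiny delta on top of shared, frozen base weights. A minimal pure-Python sketch, with all names and numbers purely illustrative:

```python
# Illustrative sketch (hypothetical, not from the interview): LoRA-style
# low-rank adapters make per-agent fine-tuning cheap because each agent
# stores only two small matrices (A, B) instead of a full copy of the
# base weights. The effective weight matrix is W_base + B @ A.

def matmul(X, Y):
    """Plain-Python matrix multiply (lists of rows)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def madd(X, Y):
    """Element-wise matrix addition."""
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

# Shared frozen base weight: 4x4 identity, for illustration.
W_base = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

def agent_weights(A, B):
    """Effective weights for one agent: frozen base plus low-rank delta."""
    return madd(W_base, matmul(B, A))

# Rank-1 adapter for one agent: a 4x1 and a 1x4 matrix, i.e. 8 numbers
# stored per agent instead of the 16 in the full weight matrix.
A = [[0.1, 0.0, 0.0, 0.1]]        # 1x4
B = [[1.0], [0.0], [0.0], [0.0]]  # 4x1
W_agent = agent_weights(A, B)
# Only the first row of the base is nudged; per-agent storage scales as
# rank * (n + m) rather than n * m.
```

Because the base weights stay frozen and shared, storing and hot-swapping many per-user adapters is feasible where keeping many full model copies would not be.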

Answer AI’s work on efficient fine-tuning of LLaMA 3 is an example of this [00:23:53]. This breakthrough, led by a single researcher, showed how combining quantization with distributed training could dramatically reduce the cost of fine-tuning and make it far more accessible [00:32:44].
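The interview does not walk through the method, but the core idea behind this kind of quantization-based cost reduction can be sketched in a few lines: split the weights into small blocks and store each block as low-bit integers plus one full-precision scale. This is a simplified, illustrative version of blockwise absmax quantization, not Answer AI’s actual code:

```python
# Illustrative sketch of blockwise absmax quantization (simplified; not
# Answer AI's implementation). Each block of weights is stored as small
# signed integers plus a single full-precision scale factor.

def quantize_block(block, bits=4):
    """Quantize one block of floats to signed ints with an absmax scale."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for 4-bit signed
    scale = max(abs(x) for x in block) or 1.0
    return [round(x / scale * qmax) for x in block], scale

def dequantize_block(qblock, scale, bits=4):
    qmax = 2 ** (bits - 1) - 1
    return [q * scale / qmax for q in qblock]

def quantize(weights, block_size=64, bits=4):
    """Split weights into blocks and quantize each block independently."""
    blocks = [weights[i:i + block_size]
              for i in range(0, len(weights), block_size)]
    return [quantize_block(b, bits) for b in blocks]

def dequantize(quantized, bits=4):
    out = []
    for qblock, scale in quantized:
        out.extend(dequantize_block(qblock, scale, bits))
    return out

weights = [0.01, -0.5, 0.25, 1.0, -0.125, 0.75, 0.0, -1.0]
restored = dequantize(quantize(weights, block_size=4))
```

Round-tripping a weight vector through quantize/dequantize keeps every value within half a quantization step (scale / 14 for 4-bit signed) of the original, while cutting per-weight storage from 32 bits to roughly 4 plus a small per-block overhead.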

Answer AI is particularly excited about the law and education sectors because they are heavily language-based and offer significant opportunities for societal benefit [00:34:43].

Law

The legal system is often used as a “weapon” by wealthy individuals and organizations against less wealthy ones, creating injustice [00:35:31]. By significantly reducing the cost of high-quality legal advice, AI can democratize access to justice and counter gatekeeping practices [00:36:12].

Education

Education has immense potential for improvement. Current systems often force students through the same path, limiting customization [00:37:48]. AI can enable more personalized learning experiences, allowing more people to pursue their passions and build what they envision [00:37:22].

Views on AI Safety and Regulation

Jeremy Howard expressed concern about proposed legislation, such as California’s SB 1047, which aims to regulate the safety of AI models [00:39:02]. While well-intentioned, such policies could be ineffective or even counterproductive, leading to a less safe situation [00:40:01].

The fundamental issue is that AI models, in their raw form, are “dual-use technology,” similar to a pen, paper, or calculator [00:40:32]. It’s impossible to ensure the safety of the model itself, as users can fine-tune or prompt it to do anything they desire [00:40:55].

Strict regulation on model release would mean:

  • Restricted Access: Models in their raw form (like LLaMA 3) would likely not be released, only layered products (like ChatGPT) that offer limited user control [00:42:37].
  • Centralization of Power: This turns models into “extremely rivalrous goods,” accessible only to large states and corporations [00:43:10]. This fosters a competitive race for bigger models without transparency or external scrutiny [00:43:46].
  • Hindering Defensive Uses: Open access to models allows many people to use them for beneficial, defensive purposes, such as improving cybersecurity or developing vaccines [00:43:34]. Restricting this access could ironically make the world less safe [00:46:05].

Eric Ries suggests that foundation model labs, while pursuing AGI, could benefit from re-establishing connections between their research and customers [00:47:40]. Many valuable applications don’t require AGI and could be built with smaller, safer models [00:45:17]. If these safer options aren’t available, users might default to potentially unsafe frontier models for tasks [00:46:37].

Overhyped and Underhyped Aspects of AI

  • Overhyped: AI agents. The mathematical foundations of current models are poorly suited to the novel planning many people attempt with them [00:48:34]; agents are strong at mixing and matching patterns from their training data, but not at novel planning sequences [00:22:31].
  • Underhyped: Resource efficiency, which can dramatically increase accessibility and unlock new use cases [00:48:38].

Breakthroughs That Could Change Perceptions

Two major breakthroughs could significantly alter current understanding of AI:

  1. Reduced Energy Requirements: A breakthrough that sharply reduces the energy or other resource requirements of AI models would be huge [00:50:31].
  2. Advanced Planning/Reasoning: A breakthrough in planning and reasoning capability that goes beyond subgraph matching, the limitation of the current auto-regressive, word-by-word approach [00:51:10]. Approaches such as Yann LeCun’s JEPA (Joint Embedding Predictive Architecture) models or diffusion models for text could achieve this [00:52:12].

A profound realization might also come from understanding that the problem isn’t just in scaling LLMs, but in our understanding of human cognition itself [00:53:20]. Current methods might be a “brute force” way of finding critical cognitive algorithms, and a more direct approach could be a true breakthrough [00:54:11].

For more information, visit answer.ai [00:54:42].