From: redpointai

Jonathan Frankle, Chief AI Scientist at Databricks, has a keen interest in policy and the societal implications of AI. He believes it is crucial for society to feel comfortable using AI in sensitive fields such as medicine and self-driving vehicles [00:00:24].

Key Considerations for AI Applications

Frankle highlights two main patterns where AI applications have found product-market fit:

  1. Situations where perfection is not required: These include brainstorming, creative applications, marketing, and media, where there are many acceptable answers and errors are not critical. An example is Glean, which helps surface information without needing to be perfect [00:19:50].
  2. Scenarios where AI outputs are checked by humans: AI can propose answers to problems where human generation is costly but human checking is quick. An example is co-pilots for coding, where it’s harder for a human to write code from scratch than to verify AI-generated code [00:20:14]. This also applies to customer support [02:28:29].

However, Frankle cautions against applying AI in high-stakes legal scenarios where details are critical, as checking such outputs can be as time-consuming as producing them manually [00:20:27].

Trust and Explainability

Frankle emphasizes that AI’s “fuzziness” is both a superpower and a challenge [00:22:35]. While AI can perform tasks like document parsing without complex regular expressions, its unpredictability means users may not always get the results they expect [00:22:47].
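A minimal sketch of that trade-off in Python is below: the regex parser fails loudly on any phrasing it was not written for, while a model-based parser handles messy inputs but offers no guarantee about its output. The names here (`extract_total_regex`, `extract_total_llm`) are illustrative, and `call_llm` is a hypothetical stand-in for whatever completion API is in use, not a real client.

```python
import re


def extract_total_regex(text: str) -> str | None:
    """Deterministic but brittle: matches only one exact phrasing."""
    match = re.search(r"Total:\s*\$([\d,]+\.\d{2})", text)
    return match.group(1) if match else None


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion API; not a real client."""
    raise NotImplementedError("wire this up to a model provider")


def extract_total_llm(text: str) -> str:
    """Fuzzy but flexible: copes with "Amount due", tables, typos, etc.
    The trade-off is that the output format is not guaranteed."""
    prompt = (
        "Extract the total amount owed from the document below. "
        "Reply with only the number.\n\n" + text
    )
    return call_llm(prompt)


doc = "Amount due this period: USD 1,204.50"
print(extract_total_regex(doc))  # None -- the pattern misses this phrasing
# extract_total_llm(doc)         # would likely return "1204.50", but may not
```

The regex fails in predictable ways that are easy to reason about; the model fails less often but unpredictably, which is exactly the trust problem described below.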

A major hurdle for societal acceptance of AI, especially in fields like healthcare and autonomous vehicles, is the lack of intuition about when AI systems will fail. Unlike humans, whose errors we can often rationalize, AI mistakes can feel “inexplicable and unpredictable” [02:29:19]. For example, autonomous vehicles might struggle in situations where a human driver would also struggle, but sometimes they misidentify objects in ways a human would not [02:52:51].

Policy Participation and Transparency

Frankle asserts that those in the AI field, as experts on these systems, have a responsibility to ensure they are used responsibly and to participate in policy conversations. This participation should be driven not solely by self-interest but by a commitment to society [02:53:08]. Building trust is paramount in policy discussions, which requires integrity and transparency, especially given the inherent financial interests of companies [02:53:54].

He stresses the importance of “scientific honesty”: clearly stating what is known and unknown about AI capabilities. This honesty builds trust and helps manage expectations, preventing the over-promising of future capabilities such as “superintelligence” within specific timeframes [02:57:46].

Regulation and Policy Implications

Frankle advocates a nuanced approach to AI policy and regulation. He believes it is acceptable, and sometimes necessary, to decide not to use AI systems in certain contexts if they are not reliable enough. For instance, in law enforcement applications of facial recognition, the risks to an individual’s life and liberty are too high for potentially unreliable systems [02:55:21].

He suggests that evaluating AI systems may also lead to reassessing human performance standards. For example, scrutiny of facial recognition AI revealed that humans are “really bad at this” in certain situations [02:57:46].

While advocating caution in high-stakes areas like law enforcement, medicine, and autonomous vehicles, where mistakes can be life-threatening [03:00:13], Frankle generally supports allowing innovation elsewhere. He advises against blanket regulation, preferring a context-by-context, application-by-application approach to AI policy [02:56:24].