From: redpointai

Jonathan Frankle, Chief AI Scientist at Databricks, has discussed the critical need for society to feel comfortable using AI in various high-stakes domains, including medicine and self-driving technologies [00:00:24]. He emphasizes that the development of AI policy requires participation from those in the AI field to ensure responsible use and societal well-being [00:53:00].

Understanding AI’s Limitations and Societal Comfort

A key challenge for societal comfort with AI stems from its inherent “fuzziness” and unpredictability [00:22:52]. While AI can unlock significant value, it lacks the precise, deterministic behavior of traditional code and may not always deliver the expected results [00:22:56].

Frankle highlights that humans have developed good intuitions about when other humans might fail; in driving, for example, the potential weaknesses of human drivers are well understood [00:26:08]. This allows people to rationalize and cope with mistakes [00:26:23]. With AI, however, these intuitions are lacking, and the technology’s failures can be “inexplicable and unpredictable” [00:26:50]. Building models of uncertainty is crucial if humans are to make peace with AI’s potential shortcomings [00:27:08].

The lack of explainability when AI systems go wrong is a significant barrier to public trust [00:29:19].

Policy Considerations and Standards

Frankle, drawing on his experience in the policy world, argues that the high standards applied to automated AI systems should also prompt a re-evaluation of the standards applied to human performance [00:27:31]. In facial recognition, for example, humans are often “really bad” at the task, and the introduction of automated systems highlighted human fallibility, especially in recognizing people who do not look like them [00:27:43]. Similarly, autonomous vehicles are held to very high standards, which raises the question of how human drivers should be assessed [00:28:38].

“It’s okay to sometimes say maybe this system is not reliable enough and we don’t want to use it in certain contexts.” [00:55:24]

Frankle believes that society needs to determine when to permit and when to prohibit AI systems [00:55:21]. In high-stakes areas, such as law enforcement’s use of facial recognition, the risks are substantial, potentially affecting someone’s life, liberty, or job due to a false match [00:55:30]. He cautions against blanket regulation and instead advocates for thoughtful, context-by-context and application-by-application consideration [00:56:23].

While creativity and innovation should be allowed to flourish in most areas of AI, extraordinary caution is advised where mistakes carry severe consequences, such as in medicine, autonomous driving, and law enforcement’s use of facial recognition.

Building Trust

Trust is paramount in conversations about the policy implications of AI advancements [00:54:21]. Experts in the AI field should engage in these conversations not out of self-interest, but with a commitment to public service [00:53:13].

“People know when you’re coming in with self-interest… trust is very hard won and easily lost.” [00:54:14]

Frankle stresses the importance of scientific honesty: clearly stating what is known and what is not known about AI capabilities and future developments [00:57:52]. This transparency helps build and maintain public trust, which is crucial given the high stakes involved in AI’s broader societal integration [00:57:56].