From: redpointai

AI policy and regulation are critical considerations as artificial intelligence evolves. Arthur Mensch, CEO and co-founder of Mistral, shared his perspective on how policy should develop and on the shortcomings of current approaches [00:17:04].

Approach to AI Safety and Regulation

Mensch believes that AI safety should be approached as a product-safety problem, much as software safety has been handled [00:17:19]. This means evaluating a product against its expected functionality and verifying that it performs as intended [00:17:25].

The EU AI Act

The EU AI Act initially aligned with this product-safety view [00:17:32]. However, lobbying efforts led to the introduction of technology-specific rules, including mandatory evaluation and “red-teaming” requirements triggered by FLOP (training-compute) thresholds for large language models (LLMs) [00:17:48].
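To make the FLOP-threshold mechanism concrete: the EU AI Act presumes “systemic risk” for general-purpose models trained with more than 10^25 floating-point operations. The sketch below uses the standard approximation that training compute ≈ 6 × parameters × training tokens; the model configurations are illustrative, not actual Mistral figures.

```python
# Back-of-the-envelope check against the EU AI Act's systemic-risk
# threshold of 1e25 cumulative training FLOPs. Training compute is
# estimated with the common approximation FLOPs ~= 6 * N * D, where
# N = parameter count and D = training tokens. The configurations
# below are hypothetical, not actual Mistral figures.

EU_AI_ACT_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the 6 * N * D rule of thumb."""
    return 6 * n_params * n_tokens

# (description, parameters, training tokens) -- illustrative setups
candidates = [
    ("7B params, 2T tokens", 7e9, 2e12),
    ("70B params, 15T tokens", 70e9, 15e12),
    ("400B params, 15T tokens", 400e9, 15e12),
]

for name, n, d in candidates:
    flops = estimated_training_flops(n, d)
    flagged = flops > EU_AI_ACT_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> above threshold: {flagged}")
```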

While this is manageable for companies like Mistral, which already red-team and evaluate their models [00:17:58], Mensch argues the approach is ill-directed [00:18:32]. Like programming languages, LLMs can be put to many different purposes, so certifying product safety from model evaluation alone is difficult [00:18:07]. The core problem of ensuring an AI product is safe remains unsolved, because it requires rethinking continuous integration and verification for stochastic models [00:19:17].
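A minimal sketch of what “continuous integration for stochastic models” might look like: instead of asserting a single deterministic output, a CI gate samples the model repeatedly and enforces a minimum pass rate. Everything here (the call_model stub, the prompt, the thresholds) is hypothetical, an illustration of the idea rather than any particular framework.

```python
import random

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; a real pipeline would hit
    an inference endpoint. Stochastic by construction."""
    return "4" if random.random() < 0.97 else "four"

def passes(output: str) -> bool:
    """Task-level check: correct answer in the expected format."""
    return output.strip() == "4"

def stochastic_ci_gate(prompt: str, n_samples: int = 100,
                       min_pass_rate: float = 0.9) -> bool:
    """CI gate for a stochastic component: sample repeatedly and require
    a minimum pass rate, rather than asserting one deterministic output."""
    successes = sum(passes(call_model(prompt)) for _ in range(n_samples))
    rate = successes / n_samples
    print(f"pass rate: {rate:.0%} (threshold {min_pass_rate:.0%})")
    return rate >= min_pass_rate

if __name__ == "__main__":
    ok = stochastic_ci_gate("What is 2 + 2? Answer with a single digit.")
    raise SystemExit(0 if ok else 1)
```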

Transparency of Training Data

Discussions around transparency of training datasets are ongoing, with the caveat that trade secrets need protection given the competitive landscape [00:18:45]. Similar discussions are evolving in the US [00:19:00].

Regulating the Application Layer

Mensch suggests that policymakers should instead put pressure on application makers to verify that their AI solutions actually solve the intended task [00:21:17]. This would create a “second-order pressure” on foundation model makers to provide tools and models that application developers can effectively control and verify [00:21:34].

This approach promotes healthy competition, as application makers would choose models that offer the best control [00:22:21]. In contrast, directly regulating the technology can favor large players who have the resources to influence regulators and standard-setting bodies [00:22:43].
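One way to make the competition argument concrete: if application makers run the same task-level evaluation against several candidate models, the measured pass rate becomes the axis on which providers compete. The sketch below is hypothetical; the provider names and scores are invented, and task_pass_rate would wrap an evaluation like the CI gate above.

```python
def task_pass_rate(model_name: str) -> float:
    """Placeholder scores; in practice, run the application's own
    evaluation suite against each candidate model."""
    scores = {"provider-a": 0.91, "provider-b": 0.97, "provider-c": 0.88}
    return scores[model_name]

# The application maker selects whichever model is most controllable,
# i.e. scores highest on the task the application actually needs solved.
candidates = ["provider-a", "provider-b", "provider-c"]
best = max(candidates, key=task_pass_rate)
print(f"selected: {best} ({task_pass_rate(best):.0%} task pass rate)")
```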

Global and Geopolitical Implications

The emergence of country-specific foundation models, such as those appearing in India and Japan, is a notable trend [00:23:01]. Mensch believes it is crucial to enable countries and developers to deploy AI technology wherever they want, and portability is Mistral’s approach to national sovereignty [00:23:23].

Another critical aspect is language. Current models perform significantly better in English than in other languages [00:23:41]. Mistral aims to build models that are proficient in every language, starting with French [00:23:51]. This focus on multilingualism is meant to ensure that generative AI benefits the entire world and that the technology becomes ubiquitous [00:24:08].

While it may be technologically optimal to have a few global LLM providers, the political question of countries wanting their own homegrown AI companies remains [00:25:02]. Mensch suggests that if companies provide portable and multilingual technology that countries can modify and control, this should address sovereignty concerns [00:25:13]. A situation in which only a few companies offered models solely as Software-as-a-Service (SaaS) would indeed pose a sovereignty problem, one that many countries have already identified [00:25:41].