From: redpointai

Arthur Mensch, CEO and co-founder of Mistral, shared his perspective on AI policy and regulation, particularly regarding the EU AI Act and the broader future of AI oversight [00:17:04].

Approach to AI Safety and Regulation

Mistral’s position is that AI safety and regulation should be addressed from a “product safety perspective” [00:17:17]. This approach mirrors how software safety has traditionally been managed, focusing on the end product, its expected performance, and methods for evaluation [00:17:21].

Mensch believes that regulators should apply pressure on application makers to ensure their products function correctly and safely [00:21:12]. This would create a “second order pressure” on foundational model developers, as application makers would demand models that are easier to control and verify for specific tasks [00:21:34]. He suggests providing application makers with evaluation tools and methods for continuous integration and verification [00:22:01].
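As a rough illustration of what such tooling might look like, here is a minimal sketch of a release-gating evaluation suite that an application maker could run in continuous integration. Everything here is a hypothetical stand-in for illustration, not anything Mensch describes Mistral shipping: `query_model`, the substring check, and the pass-rate threshold are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    must_contain: str  # a simple, automatically checkable expectation

def query_model(prompt: str) -> str:
    """Hypothetical placeholder for the application's real model call."""
    return "Paris is the capital of France."

def run_suite(cases: list[EvalCase], pass_threshold: float = 0.95) -> bool:
    """Run every case and gate the release on the aggregate pass rate."""
    passed = sum(case.must_contain in query_model(case.prompt) for case in cases)
    rate = passed / len(cases)
    print(f"pass rate: {rate:.1%} (threshold {pass_threshold:.0%})")
    return rate >= pass_threshold

if __name__ == "__main__":
    suite = [EvalCase("What is the capital of France?", "Paris")]
    assert run_suite(suite), "evaluation gate failed; block the release"
```

Gating a release on an aggregate pass rate in CI mirrors the dynamic Mensch describes: the direct pressure lands on the application maker, who in turn needs models whose behavior on a specific task is easy to verify.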

Mensch highlighted that the current regulatory trend, shaped by lobbying, introduces “technology regulation” based on factors such as compute (FLOP) thresholds, which he views as an “ill-directed burden” [00:17:48]. While this is manageable for Mistral, which already performs evaluations and maintains documentation, he argues it does not solve the core product safety problem [00:18:02].

Making an AI product safe remains a hard problem given the stochastic nature of the models [00:19:22]. He emphasizes that rethinking evaluation and continuous verification is a technological and product problem, not primarily a regulatory one [00:19:47].
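To make that stochasticity concrete: the same prompt can yield different outputs, so a check that passes once may fail on the next sample, and verification has to be statistical rather than a single pass/fail run. A minimal sketch, with illustrative names and thresholds that are assumptions rather than anything from the interview:

```python
import random
from typing import Callable

def stochastic_check(run_once: Callable[[], bool],
                     n_samples: int = 50,
                     min_pass_rate: float = 0.9) -> bool:
    """Repeat a nondeterministic check and gate on the observed pass rate,
    since any single sample from a stochastic model proves little."""
    passes = sum(run_once() for _ in range(n_samples))
    return passes / n_samples >= min_pass_rate

# Toy stand-in for a model-backed check that succeeds ~95% of the time.
flaky_check = lambda: random.random() < 0.95
print(stochastic_check(flaky_check))  # True on most runs
```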

Furthermore, he expresses concern that regulating the technology directly, rather than its applications, favors larger players who can deploy “an army of lawyers” to influence regulators and standard-setting bodies, hindering healthy competition [00:22:40].

Transparency of Training Data

There are ongoing discussions around the transparency of training data sets, which Mistral would like to enable, with the caveat that “trade secrets” need protection given the competitive landscape [00:18:45]. The same issue applies to regulation evolving in the US [00:18:59].

Global and Geopolitical Implications

The emergence of foundation models for specific countries (e.g., India, Japan) suggests a trend toward localized AI capabilities [00:23:03]. Mensch believes that enabling countries and developers to deploy AI technology where they want — through “portability” — is the best approach to sovereignty [00:23:25].

The importance of language-specific models is also highlighted, as current models perform significantly better in English [00:23:43]. Mistral aims to create models that excel in every language, starting with French, which is largely handled at the pre-training stage [00:23:51]. Mistral’s strategy is to be a global, portable, and multilingual company, ensuring their technology is ubiquitous [00:24:14].

Mensch considers the proliferation of country-specific LLM companies more of a political than a technological question [00:25:00]. If countries can access and modify technology as desired, it should foster confidence and control [00:25:13]. However, if only a few companies offer AI as a Software-as-a-Service (SaaS), a “sovereignty problem” arises, which many countries have already identified [00:25:41].