From: allin
The Biden Administration published an executive order (EO) on AI regulation on October 30, 2023 [00:47:04]. The long-anticipated order, spanning 111 pages, covers a broad range of matters in considerable detail [00:47:09].
Overview of the Executive Order
The executive order is not legislation and can be easily overturned by a future administration [00:47:24]. It primarily calls for voluntary action from technology companies, asking them to submit their AI models, infrastructure, and tools for safety review [00:47:35]. A notable component of the EO is its requirement that AI models account for diversity, equity, and inclusion [00:47:47].
The EO defines a “dual-use foundation model” as an AI model meeting all of the following criteria (a schematic reading of the definition is sketched after this list):
- Trained on broad data, generally using self-supervision [00:47:59].
- Containing at least tens of billions of parameters [00:48:04].
- Applicable across a wide range of contexts [00:48:05].
- Exhibiting high performance at tasks that pose a serious risk to national security, economic security, public safety, or any combination thereof [00:48:07].
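Since the definition is a conjunction of criteria, it can be read as a simple predicate. The sketch below is purely illustrative and not anything the EO specifies: the `ModelProfile` fields are invented names, and the numeric parameter floor is an assumption, since “tens of billions” is not a precise cutoff.

```python
# Illustrative only: the EO's definitional criteria expressed as a predicate.
# Field names and the numeric threshold are assumptions, not EO text.
from dataclasses import dataclass

@dataclass
class ModelProfile:
    trained_on_broad_data: bool
    uses_self_supervision: bool
    parameter_count: int          # total trainable parameters
    wide_range_of_contexts: bool
    serious_security_risk: bool   # national/economic security or public safety

def is_dual_use_foundation_model(m: ModelProfile) -> bool:
    """All criteria must hold; "tens of billions" is approximated as >= 1e10."""
    return (
        m.trained_on_broad_data
        and m.uses_self_supervision
        and m.parameter_count >= 10_000_000_000  # assumed floor
        and m.wide_range_of_contexts
        and m.serious_security_risk
    )
```

Even this toy version illustrates the criticism raised below: a fixed size threshold is exactly the kind of standard that rapid model progress can quickly render meaningless.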
Specific Directives and Requirements
The executive order includes directives for various federal agencies:
- National Institute of Standards and Technology (NIST): To develop standards for red-team testing of AI models [00:58:56].
- Department of Commerce: To develop standards for labeling AI-generated content [00:59:04].
- Federal Trade Commission (FTC): To use its existing regulatory authority to regulate AI [00:59:11].
- Federal Communications Commission (FCC): To consider spectrum licensing rules through the lens of AI [00:59:17].
Additionally, the EO proposes regulations requiring “infrastructure as a service” providers (e.g., Azure, Google Cloud, Amazon) to submit reports when a foreign person uses their APIs to train a large AI model with potential malicious cyber capabilities [00:55:51]. It also calls for a “Chief AI Officer” in every federal agency, responsible for regulating the systems and methods of private companies building software [00:49:05].
Criticisms and Concerns
Critics argue that the EO creates as much confusion as clarity [00:47:19] and that much of its content should be legislated rather than imposed through an executive order [00:47:26].
A primary criticism is that the EO attempts to regulate the systems and methods of AI development rather than focusing on outcomes and applications [00:48:42]. This approach is considered problematic because it sets arbitrary standards, such as model size measured in parameters, that quickly become outdated given the rapid pace of AI progress [00:49:17]. An example given is the Transformer architecture underlying modern LLMs, whose foundational paper (“Attention Is All You Need”) was published in 2017 and widely adopted by 2018; a mere five years later, the technology has evolved dramatically [00:49:32].
“To come out and say here are the standards by which we want to now regulate you, this is the size that the model can be, these are the types of models, it’s going to look like medieval literature in three years, none of this stuff’s even going to apply anymore.” [00:50:00]
Other specific criticisms include:
- Watermarking AI content: This is seen as an outlandish infringement on First Amendment rights, since defining “AI content” is difficult given the widespread use of digital tools (e.g., Photoshop, Auto-Tune, CGI) in media creation [00:52:28].
- “Pre-crime” conviction: The EO is characterized as “convicting AI of a pre-crime” by forecasting potential harms before they occur, which could stifle innovation. This contrasts with past technological revolutions like the PC, where fears about job displacement or privacy infringement did not lead to prescriptive government guidance on industry development [00:54:01].
- Conflicting regulations: The government is simultaneously suing companies like SpaceX for not employing enough foreigners while proposing regulations that require reporting on foreign use of AI infrastructure [00:56:40]. This creates a situation where companies may be in non-compliance regardless of their actions [00:56:51].
- Bureaucracy and “regulatory capture”: The EO activates every government agency to create more bureaucracy [00:59:22], potentially leading to a “Brussels-style bureaucracy” [01:00:51]. This fragmented approach might force the industry to eventually advocate for a single, overarching federal agency to manage software, akin to the FDA for pharmaceuticals or the FCC for communications [01:01:03].
Impact on Innovation and Competitiveness
Critics argue that regulating the methods and systems of AI development in the United States could hinder American progress, allowing countries like India, China, and Singapore to gain a significant lead in model development and capabilities [00:50:16]. This could severely impact America’s global competitiveness in technology [01:05:00].
“The more our government actors step in and try and tell us what systems and methods we are allowed to use to build stuff, the more at risk we are of falling behind.” [00:50:32]
The largely unregulated nature of the U.S. software ecosystem, which has fostered innovation and attracted risk capital, is seen as a key factor in the country’s technological leadership [01:05:51]. The EO, however, is perceived as a step toward making permissionless innovation, such as two individuals starting a company in a garage, increasingly difficult [01:07:09].
Future Outlook
Despite the concerns about AI’s acceleration and potential risks, some believe the current regulatory efforts are premature. It is suggested that the technology be allowed to develop further, and that its problems become more manifest, before comprehensive regulation and oversight of AI are implemented [01:26:09]. This would also allow time for industry-led solutions to emerge [01:26:27]. While the EO does streamline immigration for people with critical AI skills [01:00:21], this benefit is offset by the complexity of its other regulatory requirements.