From: allin
Biden Administration’s Executive Order on AI
On October 30th, the Biden Administration published a long-anticipated executive order (EO) aimed at regulating AI [04:50:00]. The 111-page document covers a broad range of matters with specific, detailed actions, using technical terms of art [04:50:00]. However, as an executive order rather than legislation, it can easily be overturned by the next administration [04:50:00]. The order largely seeks voluntary action from technology companies, asking them to submit their models, infrastructure, and tools for safety review [04:50:00]. It also includes an “equity and inclusion” component, requiring models to account for diversity, equity, and inclusion [04:50:00].
The EO defines a “dual-use foundation model” as an AI model that is trained on broad data, generally uses self-supervision, contains at least tens of billions of parameters, is applicable across a wide range of contexts, and exhibits high levels of performance at tasks that pose a serious risk to security, national economic security, public safety, or any combination thereof [04:55:00].
Critiques of the Executive Order and AI Regulation
A significant problem identified with this executive order, and with AI regulation more broadly, is that it attempts to define and regulate “systems and methods” rather than “outcomes and applications” [04:59:00]. For example, instead of regulating outcomes such as fraud or false impersonation carried out via software, the EO mandates a “chief AI officer” in every federal agency, responsible for regulating the systems and methods of private companies building software [05:07:00]. This approach creates an “outlandish standard” that makes little sense given the rapid pace of AI progress [05:07:07].
Arbitrary Requirements and Lack of Coherence
Critics argue that the EO is “convoluted and confused,” containing “arbitrary requirements” [05:40:00]. One example is the requirement that companies self-report to the government if a model reaches a certain number of parameters, which is deemed nonsensical [05:51:00]. Another concern is the push to watermark AI content [05:30:00]. Defining “AI content” is problematic, since digital processing is inherent in much of modern media (e.g., Photoshop, auto-tune, CGI) [05:32:00]. The watermarking requirement is seen as an “outlandish infringement on First Amendment rights” and as demonstrating a lack of understanding of how software is currently used [05:27:00].
“Pre-crime” and Regulatory Capture
One critic likens the EO to convicting AI of a “pre-crime,” describing a “litany of horribles” that will occur unless the government guides its development [05:06:00]. This approach contrasts with the dawn of the PC revolution in the 1970s and 80s, where fears of computers taking over jobs or infringing on privacy did not lead to an executive order guiding microprocessor development [05:02:00]. If such guidance had been in place, the industry would not have achieved its full potential, leading to “extreme regulatory capture” where established companies would work with regulators to define rules that benefit themselves and exclude new entrants [05:33:00].
The EO’s directive for federal agencies to create their own regulations is expected to create more bureaucracy and burden technology companies [05:53:00]. This also creates authority for government agencies to access private servers and conduct audits, a move that is considered an overreach of government’s role [05:58:00].
Concerns about Innovation and Global Competitiveness
Risk of Falling Behind
A major concern is that if the U.S. market is not allowed to develop freely, countries like India, China, and Singapore will get “well ahead” in their model development and capabilities [05:21:00]. This could have a “major impact on our global competitiveness” [06:00:00]. While America has historically led in new technology development (internet, mobile, cloud), this leadership is not due to government involvement but to a vibrant private sector willing to invest risk capital [06:00:00].
Impact on Open-Source AI
The regulations are seen as targeting open-source software [05:31:00]. Companies like OpenAI, which is now closed-source, would benefit from pulling up the ladder before open-source software can gain momentum [05:31:00]. The existence of open-source alternatives such as Llama 2 (though its license imposes user-count thresholds) and Mistral is highlighted as crucial for unencumbered growth [06:01:00].
Over-regulation and the “Federal Software Commission”
As every agency of the government is activated to create more bureaucracy and regulations, the burden on technology companies will grow [05:53:00]. It is predicted that the industry will eventually “cry uncle” and beg for a single, rationalized regulatory agency [06:05:00]. This could result in a “Federal Software Commission,” similar to the FCC for communications or the FDA for pharma, which would force software companies to go to Washington to seek permission [06:08:00]. This would jeopardize the “permissionless” nature of software development, which allows individuals to start companies easily [06:11:00].
Defining AI and its Current Stage
The term “AI” itself is questioned, as it’s seen as a rebranding of existing concepts like “math people,” “statisticians,” “data science,” “big data,” and “machine learning (ML)” [06:08:00]. At its core, AI is described as software-based algorithms using data, software, and statistics [06:08:00]. While Transformer models have led to improvements in statistical tools and algorithm development, it’s argued that these are extensions of processes ongoing since the 1960s [06:08:00].
The current excitement around AI is largely attributed to language models enabling computers to communicate and understand language, and generative tools creating digital outputs like art, video, and audio [06:25:00]. These are seen as powerful new software capabilities that open up new markets and business models [06:25:00]. However, the idea that this signifies true “intelligence” in the science fiction sense is dismissed [06:33:00]. The apparent “human and lifelike” nature of AI outputs is attributed to significantly improved “statistical guessing” capabilities, which are experiencing a 400x improvement from the baseline every year due to investment, hardware, and model improvements [06:39:00].
At present, there is said to be no “proven use case that is 10x in productivity” for AI [06:05:00]. Given this, the current push for comprehensive regulation is considered “premature”; the argument is that the technology should be allowed to “bake” and its problems become more manifest before intervention [06:09:00].