Source: All-In Podcast
OpenAI, co-founded by Sam Altman in 2015, set out with the goal of ensuring that artificial general intelligence (AGI) benefits all of humanity [00:01:03]. Sam Altman, who previously served as president of Y Combinator from 2014 to 2019, joined OpenAI full-time as CEO in 2019 [00:01:09].
Key Milestones and Products
A significant turning point for OpenAI occurred on November 30, 2022, with the launch of ChatGPT [00:01:14]. The product reportedly became the fastest in history to reach 100 million users, achieving that milestone in just two months [00:02:08]. In January 2023, Microsoft made a multibillion-dollar investment in OpenAI, and the company reportedly reached $2 billion in annual recurring revenue (ARR) last year [00:02:18].
Other notable developments from OpenAI include:
- GPT-4: This model has seen significant improvements, particularly in recent months [00:03:06]. OpenAI is working to make GPT-4-level technology available to free users, though doing so remains very expensive [00:05:04].
- Sora: This video model generates “amazing moving images” [00:31:15]. Sora is not built on top of a language model; it is customized for video [00:32:11].
Sam Altman’s Brief Departure and Return
In November 2023, Sam Altman was briefly fired from OpenAI over a “crazy 5-day span” [00:01:25]. This event led to widespread speculation, including theories that the team had reached AGI and that “the world was going to end” [00:01:36]. Within a few days, he was reinstated as CEO [00:01:42]. Altman confirmed he was fired and considered his options, but ultimately returned due to his love for OpenAI and its people [00:53:02]. He respects the former board’s commitment to AI safety, despite disagreeing with their decision-making [00:56:43].
The board’s composition, with a majority of disinterested directors, played a role in the structure that led to Altman not having equity in OpenAI [00:59:45]. This lack of equity sometimes leads to “weird questions” about his motivations [00:59:59]. He also clarified that projects like device companies or chip fabrication companies, which he is reportedly involved in, would be under OpenAI’s equity, not his personal ventures [01:01:27].
Future of AI Development
GPT-5 and Model Evolution
OpenAI takes its time with major model releases [00:02:44]. While there are reports of GPT-5 launching in the summer, Sam Altman notes that it might not even be called GPT-5 [00:03:01]. He suggests a future where AI systems continuously improve rather than through discrete version numbers (1, 2, 3, 4, 5) [00:03:17]. This continuous improvement is seen as technologically better and easier for society to adapt to [00:03:32].
Open vs. Closed Source
Altman believes there are “great roles for both” open and closed source models [00:07:10]. OpenAI’s primary mission is to build towards AGI and broadly distribute its benefits [00:07:17]. He is particularly interested in an open-source model that can run effectively on a phone [00:07:41].
The initial rationale for OpenAI being open was the belief that AI was “too important for any one company to own” [00:09:34]. Later, the perception shifted to it being “too dangerous for anybody to be able to see it,” leading to a more closed approach [00:09:43]. However, Altman argues that releasing ChatGPT was a way to make the world “see this” and understand AI’s importance [00:10:12].
Cost and Latency
Reducing the cost and dramatically cutting the latency of AI models are “hugely important” to OpenAI [00:06:16]. Altman is confident this will happen due to the early stage of the science and “engineering tailwinds” [00:06:27]. He envisions a future where “intelligence [is] too cheap to meter and so fast that it feels instantaneous” [00:06:50].
AI Infrastructure
To achieve cheaper and faster compute, significant algorithmic gains are expected [00:14:22]. The entire supply chain for AI, including logic fab capacity, HBM manufacturing, data center construction, and energy, is complex and presents bottlenecks [00:14:47].
The Role of Reasoning
A key missing element for many AI applications is models that can perform reasoning [00:27:25]. Altman believes that if core generalized reasoning can be figured out, connecting it to new problem domains will be “doable” and a “fast unlock” [00:31:49].
Impact of AI on Industries and Society
New Device Form Factors
Sam Altman is highly interested in “great new form factors of computing” enabled by technological advancements [00:15:47]. While the iPhone is considered “the greatest piece of technology humanity has ever made,” he anticipates a shift beyond current devices [00:16:07]. Voice interaction, despite current latency issues, is seen as a hint to the “next thing” in computing [00:17:34]. Computer vision, combined with voice, allowing AI to understand its surroundings, is another powerful multimodal direction [00:18:13].
Altman envisions an “always on, super low friction” AI assistant that “just kind of knows what I want” and has context to help throughout the day [00:19:21]. He prefers the model of AI as a “great senior employee” rather than an “extension of myself” or an “alter ego” [00:20:25]. This assistant would be an “always available, always great, super capable assistant executive agent” that can reason and even push back [00:20:48].
App Interaction and User Interfaces
Altman is interested in designing a world that is “equally usable by humans and by AIs” [00:23:23]. This could involve apps exposing APIs to AI assistants or users watching the AI interact with an app and providing feedback [00:22:55]. While voice interaction is powerful, he believes visual user interfaces will remain important for many tasks [00:24:02].
Exciting Applications
Sam Altman is particularly excited about:
- AI Tutors: The potential for AI to fundamentally reinvent how people learn [00:25:48].
- Coding Tools: Tools like Devin are seen as a “super cool vision of the future” [00:26:20].
- Healthcare: Believes healthcare should be “pretty transformed” by AI [00:26:26].
- Scientific Discovery: Most excited about AI enabling “faster and better scientific discovery” [00:26:34].
Intellectual Property and Fair Use
The conversation around AI and content creation is complex [00:33:10]. OpenAI has been engaging in licensing deals with entities like the Financial Times (FT) [00:33:06]. Altman distinguishes between generalized human knowledge (like math theorems) and art, especially when a system generates art in the style or likeness of another artist [00:34:50]. He believes the debate will increasingly shift from training data to “what happens at inference time” as training data becomes less valuable [00:35:36]. For instance, if a model generates a song in the style of Taylor Swift, even without being trained on her songs, questions arise about whether this should be allowed and how the artist should be compensated [00:36:08]. OpenAI has currently chosen not to develop music models due to the complexities of these questions [00:38:27], and their DALL-E model prevents users from generating images of specific copyrighted characters like Darth Vader [00:40:24].
Regulating AI
Sam Altman expresses concern about “regulatory overreach” and the idea of states creating their own AI regulations [00:42:17]. He advocates for an international agency that would oversee the most powerful AI systems, similar to global oversight for nuclear weapons or synthetic biology [00:43:06]. The purpose of this would be to ensure “reasonable safety testing” for systems capable of causing “significant global harm,” such as those that could recursively self-improve or autonomously design bioweapons [00:42:58]. He suggests that regulation could apply to models trained on computers costing more than $100 billion, to avoid burdening startups [00:44:00].
Altman believes that current proposed legislation, particularly in California, is problematic because it would require government agencies to audit proprietary code and model weights [00:45:31]. He argues that such laws would quickly become outdated due to the rapid pace of AI development [00:46:18]. Instead, he favors safety testing on the outputs of models, akin to how airplanes are certified, rather than scrutinizing their internal workings [00:47:03]. He emphasizes that current models like GPT-4 do not pose a “material threat” regarding catastrophic risks [00:49:29].
Challenges and Opportunities
Jobs and Universal Basic Income (UBI)
Sam Altman began considering universal basic income (UBI) around 2016, around the same time he began taking AI seriously [00:50:22]. The theory was that the magnitude of change AI might bring to jobs and the economy warranted exploring new societal arrangements [00:50:31]. He believes direct financial aid to people can be a more effective way to eliminate poverty than traditional government policies [00:51:01].
However, given the current developments in AI, Altman wonders if a “Universal Basic Compute” model might be more fitting for the future than UBI [00:52:03]. In this model, everyone would receive a “slice of GPT-7 compute” that they could use, resell, or donate for purposes like cancer research, effectively owning a “productivity slice” [00:52:09].
Organizational Approach
OpenAI operates with a highly organized effort, not primarily to prevent edge cases, but because its systems are complex and “concentrating bets are so important” [01:03:00]. This approach contrasts with the “move fast, break things” ethos of some startups or the distributed teams of other research labs [01:03:01]. For OpenAI, putting the “whole company” to work on projects like GPT-4 proved effective, even if initially “unimaginable” for an AI research lab [01:03:19].