From: redpointai
Peter Welinder, VP of Product and Partnerships at OpenAI, shared his insights on the potential timelines for achieving Artificial General Intelligence (AGI) and superintelligence, emphasizing a significant shift in the pace of AI advancement compared to previous decades [00:00:32].

AGI Timeline and Definition

OpenAI defines AGI as autonomous systems capable of performing economically valuable work at the level of humans [00:34:41]. Welinder believes there is a chance of reaching something resembling AGI before 2030 [00:34:50]. This prediction comes with caveats, acknowledging potential unknown factors or external events like a GPU chip shortage that could delay progress [00:34:59].

Reflecting on his 15 years in the field, Welinder notes a distinct difference in the current trajectory of AI development. Previously, the path to AGI felt uncertain, but now, with rapid innovation, "things seem to be moving in an almost automatic fashion" [00:35:15]. Many long-time experts share this sentiment, indicating a significant change in the field over the last 15 to 20 years [00:35:48].

Superintelligence Outlook

Beyond AGI, the next step is superintelligence—models that are “really really smart” and capable of surpassing human intelligence [00:32:19]. This could involve abilities like thinking much faster, performing many more parallel experiments than a human, and solving complex global issues like climate change or cancer [00:36:13].

Welinder’s speculative bet is that early signs of superintelligence could emerge around 2030 [00:46:10]. However, he also acknowledges the possibility that even if AGI is achieved, its economic viability might be a bottleneck, potentially requiring another 5 to 10 years before it can be run cost-effectively [00:36:30].

Urgency for Safety Research

Welinder stresses the critical need for more research into the implications and safety of superintelligence, noting that surprisingly little research is currently being done in this area outside of organizations like OpenAI [00:33:33]. He views superintelligence as a potential existential risk to humanity, highlighting the importance of developing technical alignment techniques and regulatory frameworks on a worldwide scale [00:33:57].

He emphasizes that discussions about superintelligence should be taken seriously and believes that the sooner much more serious debates occur, the better [00:33:51]. Welinder expresses optimism that humanity will collectively navigate these challenges, much as nuclear war has been avoided through a "level of self-preservation" [00:38:00]. OpenAI's strategy involves gradually deploying models while the stakes are low in order to learn about risks like misinformation and bias, ensuring that the necessary organizational processes and safety frameworks are in place for higher-stakes scenarios like superintelligence [00:38:42]. For example, OpenAI held back GPT-4 for nearly half a year to gain clarity on its potential downsides [00:39:40].