From: jimruttshow8596

Ben Goertzel is recognized as one of the world’s leading authorities on Artificial General Intelligence (AGI), a term he is credited with coining [00:00:36]. He is also the driving force behind the OpenCog project, an open-source AGI software initiative, and SingularityNET, a decentralized network for AI services [00:00:48].

The pace of development in the AI space is rapid, with new advancements constantly emerging [00:01:06]. Goertzel likens this acceleration to the early days of personal computers in the late 1970s and early 1980s, but happening “10 times faster” [00:01:23]. He puts the odds of AGI emerging within the next five years at 60/40 [00:23:23].

LLMs vs. AGI: A Nuanced Distinction

Goertzel’s core thesis, as outlined in his paper “Generative AI versus AGI: The Cognitive Strengths and Weaknesses of Modern LLMs,” asserts that large language models (LLMs) in their current Transformer-based form will not lead to full human-level AGI [00:03:50]. However, he acknowledges that LLMs can perform many amazing and useful functions, potentially even passing the Turing test [00:05:01]. Crucially, he views LLMs as valuable components of systems that can achieve AGI [00:05:10].

He draws a fine-grained distinction between different architectural approaches:

  • OpenAI’s approach: Pursuing an AGI architecture where LLMs serve as the “integration hub” for a mixture of experts, including other non-LLM systems like DALL-E or Wolfram Alpha [00:05:22].
  • OpenCog Hyperon approach: Utilizes an “AtomSpace,” a weighted labeled metagraph, as the central hub. LLMs and other specialized AI systems (like DALL-E) exist on the periphery, feeding into and interacting with this core metagraph [00:05:55].

The fundamental question, according to Goertzel, is whether a hybrid system should have LLMs as the hub, or something else (like AtomSpace) as the hub with LLMs in a supporting role [00:06:22]. This distinction is often oversimplified in the AGI field, which tends to polarize into “LLM boosters versus LLM detractors” [00:08:22].
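
To make the contrast concrete, here is a toy Python sketch of the two orchestration patterns. All of the names, routing logic, and data structures are invented for illustration; neither OpenAI’s systems nor OpenCog Hyperon expose an interface like this.

```python
# Hypothetical illustration of "LLM as hub" versus "metagraph as hub".
from typing import Callable, Dict

# Specialized "experts" (stand-ins for DALL-E, Wolfram Alpha, etc.)
EXPERTS: Dict[str, Callable[[str], str]] = {
    "math": lambda q: f"[math engine result for: {q}]",
    "image": lambda q: f"[image generator result for: {q}]",
}

def llm(prompt: str) -> str:
    """Stand-in for an LLM call."""
    return f"[LLM completion of: {prompt}]"

def llm_hub_answer(query: str) -> str:
    """LLM-as-hub: the LLM routes to an expert and integrates the result."""
    route = "math" if any(ch.isdigit() for ch in query) else "image"
    expert_output = EXPERTS[route](query)
    return llm(f"Combine this expert output with the query.\n{query}\n{expert_output}")

def metagraph_hub_answer(query: str, atomspace: Dict[str, list]) -> str:
    """Metagraph-as-hub: a shared knowledge store mediates; the LLM is one peripheral expert."""
    atomspace.setdefault("percepts", []).append(query)          # query written into the store
    atomspace.setdefault("llm_outputs", []).append(llm(query))  # LLM contributes atoms
    atomspace.setdefault("expert_outputs", []).append(EXPERTS["math"](query))
    # Some reasoning process over the shared store would synthesize the answer.
    return f"[answer synthesized from {sum(len(v) for v in atomspace.values())} atoms]"

if __name__ == "__main__":
    print(llm_hub_answer("What is 2 + 2?"))
    print(metagraph_hub_answer("What is 2 + 2?", {}))
```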

LLM Hallucinations

The problem of LLM hallucination, where the model generates factually incorrect or nonsensical information, is a well-known issue [00:09:32]. While LLMs have improved, Goertzel believes that hallucinations can be “pretty much fully solved” without major architectural advances [00:11:06]. Techniques exist to probe the network and indicate whether it’s hallucinating, allowing for filtering [00:11:16]. However, this “trick” doesn’t contribute to AGI, because humans avoid hallucination through a “reality discrimination function” and “reflective self-modeling” [00:12:12].

“The way we avoid hallucinations as humans is by a reality discrimination function in our brain; it’s by reflective self-modeling and understanding” [00:12:12]
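
As a rough illustration of the filtering idea mentioned above, one simple probe is self-consistency: sample several answers and treat low agreement as a hallucination signal. The sketch below is a generic example under that assumption; the episode does not specify which probing technique Goertzel has in mind, and the sampling callable and threshold are placeholders.

```python
# Minimal self-consistency hallucination filter (illustrative assumption, not
# a specific published method or anything described in the episode).
from collections import Counter
from typing import Callable, List

def looks_hallucinated(
    prompt: str,
    sample_fn: Callable[[str, int], List[str]],  # returns n sampled answers
    n_samples: int = 5,
    agreement_threshold: float = 0.6,
) -> bool:
    """Flag an answer as likely hallucinated when sampled answers disagree."""
    answers = [a.strip().lower() for a in sample_fn(prompt, n_samples)]
    most_common_count = Counter(answers).most_common(1)[0][1]
    agreement = most_common_count / len(answers)
    return agreement < agreement_threshold

if __name__ == "__main__":
    # Toy stand-in for a model: consistent on one prompt, inconsistent on another.
    def fake_model(prompt: str, n: int) -> List[str]:
        if "capital of France" in prompt:
            return ["Paris"] * n
        return [f"guess-{i}" for i in range(n)]

    print(looks_hallucinated("What is the capital of France?", fake_model))  # False
    print(looks_hallucinated("Cite a 1973 paper on X.", fake_model))         # True
```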

Why Google’s Bard is “Lame”

Goertzel finds it “amazing how lame” Google’s Bard is compared to OpenAI’s GPT models or Anthropic’s Claude, despite Google inventing Transformer networks and having numerous brilliant AI experts [00:15:19]. He suggests that the difference lies in the intensive, iterative tuning of models [00:16:03]. OpenAI and Anthropic (founded by ex-OpenAI people) likely have developed specific “lore” and test suites for tuning their models that Google lacks [00:16:07].

Defining AGI and its Limitations

Defining AGI is complex, with no single agreed-upon conceptualization [00:21:33].

  • Algorithmic Information Theory Perspective: AGI is the ability to achieve a vast variety of computable goals in diverse computable environments [00:22:01]. This is formalized by Marcus Hutter and Shane Legg (co-founder of DeepMind) as a weighted average over all computable reward functions (see the formula after this list) [00:22:15].
  • Weaver’s Theory of Open-Ended Intelligence: Focuses on complex self-organizing systems individuating (maintaining existence) and self-transforming (growing beyond current capabilities) [00:24:11].
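
The Legg–Hutter formalization referenced in the first bullet is usually written as a complexity-weighted sum of performance over environments (this is the standard form from their published work, not a formula quoted in the episode):

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here E is the set of computable environments (reward functions), K(μ) is the Kolmogorov complexity of environment μ, and V^π_μ is the expected cumulative reward agent π achieves in μ, so simpler environments count exponentially more toward the measure.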

When considering human-level general intelligence, the definition becomes more specialized due to human biological and evolutionary constraints [00:24:46]. Standard AI benchmarks like IQ tests are seen as “shitty” measures [00:25:34]. The Turing Test, while historically significant, is considered a “very crude way to encapsulate the notion of functionalism,” and “nobody takes it too seriously anymore” as a measure of general intelligence [00:26:56].

Current LLM Weaknesses

Goertzel identifies two key limitations of current LLMs that differentiate them from human-level AGI:

  1. Complex Multi-step Reasoning: LLMs struggle with the kind of deep, sequential reasoning needed for original scientific research [00:30:04]. While they can “turn the crank” on advanced math (like fleshing out calculus for a novel derivative definition), they don’t originate the core ideas or identify the “interesting” paths [00:38:48].
  2. Original Artistic Creativity / Non-Banal Creativity: LLMs tend towards “banality” [00:14:14]. While they can produce output comparable to a journeyman artist (e.g., movie scripts or blues guitar solos), they don’t invent new genres or concepts akin to Thelonious Monk or Jimi Hendrix [00:29:00]. This is due to their “fundamentally derivative and imitative character” [00:33:18].

These limitations stem from the LLM architecture, which primarily recognizes “surface level patterns” in data, rather than learning deeper abstractions in the way humans do [00:32:33]. Human abstraction is guided by embodied agency, helping to anticipate novel situations and find creative solutions for survival and reproduction [00:42:25].

Alternative AI Architectures and OpenCog Hyperon

Goertzel believes that future AGI will emerge from architectures that go beyond current LLM limitations.

  • More Recurrence: Introducing more recurrence into neural networks, similar to LSTMs, could foster richer abstractions [00:46:46].
  • Alternative Training Methods: Replacing or supplementing backpropagation with methods like predictive coding [00:47:36] or evolutionary algorithms [00:51:11] could improve learning for complex, recurrent architectures (a minimal predictive-coding sketch follows this list).
  • Hybrid Architectures: Combining elements like AlphaZero (for planning/strategic thinking), neural knowledge graphs (as in DeepMind’s Differentiable Neural Computer), and Transformers, potentially with more recurrence and novel training [00:48:17]. Google and DeepMind are well-positioned for such approaches [00:48:42].
  • Minimum Description Length Learning: Yoshua Bengio’s group explores neural nets explicitly trying to learn abstractions using minimum description length principles, coupled with Transformers [00:49:31].
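
To give a concrete sense of what “replacing backpropagation with predictive coding” can mean, here is a minimal toy sketch of a two-layer linear predictive-coding model in NumPy, roughly in the Rao–Ballard style: the latent state is inferred by locally minimizing prediction error, and the weight update is local and Hebbian-like. It is illustrative only, not a reconstruction of any system mentioned in the episode.

```python
# Toy predictive coding: local error minimization instead of backpropagation.
import numpy as np

rng = np.random.default_rng(0)

# One latent layer (4 units) predicting observed data (8 units) through W.
W = rng.normal(scale=0.1, size=(8, 4))
x_obs = rng.normal(size=8)          # observed data vector
latent = np.zeros(4)                # latent state, inferred per input

lr_state, lr_weights = 0.1, 0.01

for step in range(200):
    # Prediction error at the data layer: actual minus top-down prediction.
    error = x_obs - W @ latent
    # Inference: adjust the latent state to reduce the local prediction error.
    latent += lr_state * (W.T @ error)

# Learning: a local, Hebbian-like weight update driven by the residual error.
error = x_obs - W @ latent
W += lr_weights * np.outer(error, latent)

print("remaining prediction error:", np.linalg.norm(error))
```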

OpenCog Hyperon

Goertzel’s primary focus is OpenCog Hyperon, which is built on a weighted labeled metagraph (a hypergraph where links can point to links or subgraphs, and are typed and weighted) [00:54:38]. This metagraph aims to represent diverse knowledge types (declarative, procedural, attentional, sensory) and cognitive operations (reinforcement learning, logical reasoning, pattern recognition) [00:55:02].
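
A minimal sketch of what “links that can point to links” looks like as a data structure may help build intuition; the classes below are invented for illustration and are not the actual AtomSpace implementation.

```python
# Toy weighted labeled metagraph: links are first-class atoms, so a link can
# target other links as well as nodes. Illustrative only.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Node:
    name: str

@dataclass(frozen=True)
class Link:
    link_type: str                 # e.g. "Inheritance", "Implication"
    targets: Tuple[object, ...]    # may contain Nodes *or* other Links
    weight: float = 1.0            # e.g. a truth value or confidence

class MetaGraph:
    def __init__(self):
        self.atoms = set()

    def add(self, atom):
        self.atoms.add(atom)
        return atom

if __name__ == "__main__":
    g = MetaGraph()
    cat = g.add(Node("cat"))
    animal = g.add(Node("animal"))
    inh = g.add(Link("Inheritance", (cat, animal), weight=0.95))
    # A link pointing at another link: knowledge about knowledge.
    g.add(Link("ContextLink", (Node("common-sense"), inh), weight=0.8))
    print(len(g.atoms), "atoms in the metagraph")
```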

Key aspects of OpenCog Hyperon:

  • Metaprogramming language (MeTTa): Programs are represented as sub-metagraphs, allowing programs to act on, transform, and rewrite chunks of the same metagraph in which they exist [00:55:33]. This enables a high degree of self-modification and reflection (a toy sketch follows this list) [00:56:54].
  • Reflection-Oriented: Unlike LLMs, which are focused on predicting the next token, OpenCog Hyperon is designed to recognize patterns within its own mind, processes, and execution traces, and represent them internally [00:56:59].
  • Integration of Paradigms: The framework naturally integrates various historical AI paradigms (logical inference, evolutionary programming) and new ones (self-organizing rewrite rules) [00:57:55].
  • Scalability: The main challenge is scaling the infrastructure [01:00:43]. The team is building a pipeline to compile MeTTa into highly efficient code (via Greg Meredith’s Rholang) for multi-core CPUs and for specialized hardware such as associative processing units (APUs) [01:02:05].
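
The self-rewriting idea in the first bullet can be illustrated with a toy store in which facts and rules are the same kind of object, and a rule rewrites the store it lives in. This is a deliberately simplified stand-in, not MeTTa syntax or semantics.

```python
# Toy "programs and data in one store" illustration (not MeTTa).
# Expressions are nested tuples of symbols; rules are expressions too.
store = {
    ("Inheritance", "cat", "mammal"),
    ("Inheritance", "mammal", "animal"),
    # A rule is just another expression in the same store:
    # if (Inheritance A B) and (Inheritance B C), add (Inheritance A C).
    ("Rule", "transitive-inheritance"),
}

def apply_transitivity(exprs):
    """One rewrite pass: derive new Inheritance links by transitivity."""
    derived = set()
    for a in exprs:
        for b in exprs:
            if (a[0] == b[0] == "Inheritance") and a[2] == b[1]:
                derived.add(("Inheritance", a[1], b[2]))
    return derived

# The rule expression in the store triggers a rewrite of that same store.
if ("Rule", "transitive-inheritance") in store:
    store |= apply_transitivity(store)

print(("Inheritance", "cat", "animal") in store)  # True
```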

Goertzel believes that if OpenCog Hyperon reaches human-level AGI, its path to superhuman AI would be very short due to its inherent ability to rewrite its own code [00:59:15]. He also sees it as ideally suited for scientific discovery due to its logical reasoning capabilities and capacity for precise procedural description, and for true creativity through evolutionary programming [00:59:48].

“If we get to human-level AGI with this abstract knowledge metagraph, which is based on recognizing patterns in itself and reprogramming itself, for better or worse your path from a human-level AI that can do this to a superhuman AI system is really short, because the whole system is based on rewriting its own code” [00:59:15]

The AGI field is now a genuine “race,” with significant resources being invested by large companies [01:05:06]. Despite the rapid advancements in LLMs, Goertzel’s team has made good progress on OpenCog Hyperon, often ahead of schedule, attributing this partly to increased funding and improved tooling [01:05:06].