From: jimruttshow8596

Ben Goertzel, a leading authority on Artificial General Intelligence (AGI), is credited with coining the term “artificial general intelligence” [00:00:48]. He is also the instigator of the OpenCog project, an AGI open-source software project, and SingularityNET, a decentralized network for developing and deploying AI services [00:00:56].

Defining Artificial General Intelligence (AGI)

AGI is an imprecise and informal term that refers to computer systems capable of performing tasks considered intelligent when done by humans, including those they were not specifically programmed or trained for [00:01:52]. This contrasts with narrow AI, which excels at highly particular tasks based on programming or data-driven training [00:02:35].

A key distinction of AGI is its ability to “take a leap” into domains only loosely connected to previous experiences [00:02:51]. Humans exhibit this, for instance, in learning to use the internet without explicit genetic or curriculum-based programming, through improvisation and experimentation [00:02:57]. While humans are not infinitely generally intelligent (e.g., struggling with mazes in 977 dimensions), their generality is far superior to any current AI software [00:03:33].

Examples of AGI Hard Problems

  • AlphaFold Limitations: While impressive, AlphaFold predicts protein folding based on training data. It struggles with “floppy proteins” or new molecular classes (e.g., alien chemistry) because it cannot generalize beyond its training data without manual intervention or algorithm changes [00:04:06]. A human expert, given alien molecules, would likely enjoy the challenge and improvise [00:05:22].
  • Wozniak’s Coffee Test: An AGI robot placed in a random kitchen should be able to figure out how to make a cup of coffee [00:06:22]. No current robot or AI can do this today [00:06:38].
  • Self-Driving Cars: It is unclear if achieving average human-level self-driving is AGI-hard [00:07:13]. The challenge lies in generalization to “weird things” that happen on the road, where current narrow AI training data is insufficient [00:07:38].
  • Turing Test: Passing a casual 10-minute conversation with an average person is likely achievable with current chatbot technology in a few years [00:09:00]. However, tricking an expert like Goertzel or Jim Rutt in a two-hour conversation is considered AGI-hard, requiring genuine human-level general intelligence [00:09:17].
  • Outlier Innovation: The creativity of individuals like Richard Feynman, Albert Einstein, Jimi Hendrix, or Henri Matisse involves significant leaps into the unknown, going beyond surface-level patterns in previous accomplishments. This level of innovation cannot be achieved by simply looking at patterns in existing data [01:10:38].

Criticism of Current AI Approaches (Deep Neural Networks)

Goertzel believes that deep neural networks (DNNs) and other machine learning algorithms, which dominate the AI world’s attention, are “fundamentally unsuited for the creation of human level AGI” [01:10:01]. While he views them as a significant component of an AGI architecture, he asserts they are missing many key aspects required for human-level intelligence [01:15:41].

He likens current DNNs to “very large lookup tables” that cleverly record and index what they have seen, using relevant historical data to supply responses [01:16:27]. Despite their “deep” label, these networks primarily identify “shallow patterns” in data [01:17:43]. For example, in natural language processing, they focus on sequences of words rather than building an underlying model of the conceived world [01:18:00]. This limitation is exemplified by a transformer neural net suggesting a “table saw” to fit a large table through a small door, assuming it’s a saw for tables, despite having carpentry manuals in its training data that explain its true function [01:18:23]. This indicates that these systems do not build models of reality underlying the text [02:00:00].
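
To make the “lookup table” analogy above concrete, here is a deliberately crude toy sketch. It is not an actual transformer; the memorized examples, the bag-of-letters similarity, and the function names are all illustrative assumptions. The point is only that a system which answers by echoing the most superficially similar stored example has no model of the world behind its responses.

```python
import numpy as np

# Toy caricature of the "giant lookup table" criticism: respond by finding the
# most superficially similar stored example. Real transformers interpolate over
# learned features rather than literal rows, but the criticized failure mode is
# analogous: nothing outside the span of the memorized patterns can come out.

MEMORY = {
    "how do I cut a big board in half": "use a table saw",
    "how do I fit a sofa through a doorway": "take the door off its hinges",
}

def embed(text):
    """Toy bag-of-letters embedding (a stand-in for learned features)."""
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1
    norm = np.linalg.norm(v)
    return v / norm if norm else v

def respond(query):
    """Return the stored answer of the nearest memorized question."""
    keys = list(MEMORY)
    sims = [float(embed(query) @ embed(k)) for k in keys]
    return MEMORY[keys[int(np.argmax(sims))]]

# Whatever comes back for a novel question is chosen purely by surface
# similarity; there is no underlying model of tables, saws, or doors.
print(respond("how do I fit a big table through a small door"))
```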

Current systems leverage vast amounts of data and processing power to recognize highly particular patterns and extrapolate from them [02:09:59]. This approach struggles to generalize to domains of reality that do not exhibit those specific patterns [02:17:17]. This is a “knowledge representation issue,” where knowledge is cataloged as contextualized particulars without abstraction [02:29:13]. The inability to form concise abstractions of experience directly hinders the ability to generalize to different domains [02:29:50].

Generative AI models like GPT-3 and DALL-E 2, while impressive, give the sense of being “astoundingly clever sets of statistical relationships” without true grounding [02:22:20]. This contrasts sharply with human learning, where a person can play only a few thousand war games across hundreds of titles, yet pull out broad generalizations applicable to new, different games, operating at a higher level of abstraction [02:29:58]. Humans (and even smart dogs) demonstrate “one-shot learning” by filling in knowledge gaps and improvising based on few clues [02:47:19].

The AI industry’s focus on DNNs is largely driven by commercial viability. These architectures excel at tasks that involve repeating well-understood operations to maximize defined metrics, such as making people click on web ads or obeying doctrine in military applications [02:47:40]. This allows for milking commercial value from applications that don’t require creative or imaginative AI [02:49:50].

Three Viable Paths to True AGI

Based on his essay, “Three Viable Paths to True AGI,” Goertzel outlines three promising directions for developing AGI:

1. Cognitive Level Approach: Hybrid Neural-Symbolic, Evolutionary, Metagraph-Based AGI

This approach, exemplified by the OpenCog Hyperon system, aims to emulate the human mind’s high-level functions using advanced computer science algorithms, rather than attempting to replicate biology at a low level [03:30:30]. Similar to how airplanes were inspired by birds but didn’t replicate flapping wings, this method takes inspiration from natural intelligence at a higher abstraction level [03:46:58].

Key aspects include:

  • Modular Design: Identifying distinct cognitive functions (perception, action, planning, working memory, long-term memory, social reasoning) and developing effective computer science algorithms for each [03:50:58].
  • Cognitive Synergy: Ensuring these algorithms can interoperate deeply, with transparency into each other’s internal processing, rather than being isolated “black boxes” [03:57:12].
  • Distributed Knowledge Graph: Centering the system on a large distributed knowledge graph (hypergraph or metagraph) with typed, weighted nodes and links [03:57:50]. Various AI algorithms operate on this common graph [03:38:08]; a minimal data-structure sketch appears after this list.
  • Modernizing GOFAI: This approach addresses criticisms of “good old-fashioned AI” (GOFAI).
    • Logic: Uses more advanced fuzzy, probabilistic, paraconsistent, and intuitionistic logics, which allow for uncertainty and contradiction [04:27:50].
    • Learning: Does not rely on hand-coded common-sense knowledge; instead, the system learns, including from low-level sensory data, using methods such as logical theorem proving and unsupervised learning [04:44:45].
  • Role of Evolution:
    • Implicit Evolution: In a distributed knowledge base like OpenCog’s Atomspace, economic attention allocation (spreading importance values) and preferential action of logical reasoning on important items can be mathematically described by population genetics [04:47:48]. The system inherently performs evolutionary learning without explicit genetic algorithms [04:52:00].
    • Explicit Genetic Algorithms: Used for procedure learning (e.g., learning program codelets) and creativity (e.g., evolving new logical predicates) [04:59:00].
    • Goertzel views evolution and autopoiesis (self-organization/reconstruction) as fundamental meta-dynamics underlying complex systems, akin to “being and becoming” [05:11:00].
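
As a rough illustration of the typed, weighted metagraph described in the list above, here is a minimal sketch. It is not the actual OpenCog/Hyperon Atomspace API; the class names, the (strength, confidence) truth values, and the importance-spreading step are simplified assumptions, meant only to show how several AI processes could share one weighted graph.

```python
from dataclasses import dataclass, field

# Minimal sketch of a typed, weighted metagraph in the spirit of the Atomspace
# described above -- not the real OpenCog/Hyperon API. Truth values carry
# (strength, confidence) as in probabilistic logic, and each atom carries an
# importance value for crude economy-of-attention dynamics.

@dataclass
class TruthValue:
    strength: float = 1.0      # how true the atom is taken to be
    confidence: float = 0.0    # how much evidence backs that estimate

@dataclass
class Atom:
    atom_type: str                         # e.g. "ConceptNode", "InheritanceLink"
    name: str = ""                         # for nodes
    outgoing: tuple = ()                   # for links: the atoms they connect
    tv: TruthValue = field(default_factory=TruthValue)
    importance: float = 0.0                # attention value

class AtomSpaceSketch:
    def __init__(self):
        self.atoms = []

    def add_node(self, atom_type, name, tv=None):
        atom = Atom(atom_type, name=name, tv=tv or TruthValue())
        self.atoms.append(atom)
        return atom

    def add_link(self, atom_type, outgoing, tv=None):
        atom = Atom(atom_type, outgoing=tuple(outgoing), tv=tv or TruthValue())
        self.atoms.append(atom)
        return atom

    def spread_importance(self, decay=0.9, spread=0.1):
        """Crude attention step: links share importance with the atoms they
        connect, and every atom's importance decays each cycle."""
        for atom in self.atoms:
            for neighbor in atom.outgoing:
                neighbor.importance += spread * atom.importance
            atom.importance *= decay

# Usage: different processes (reasoning, learning, perception) would all read
# and write this one shared graph -- the "cognitive synergy" point above.
space = AtomSpaceSketch()
cat = space.add_node("ConceptNode", "cat")
animal = space.add_node("ConceptNode", "animal")
space.add_link("InheritanceLink", [cat, animal], TruthValue(0.95, 0.8))
```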

2. Brain Level Approach: Large-Scale Non-linear Dynamical Brain Simulation

This path involves simulating the brain at a detailed biological level, which is distinct from current DNNs that use simplified neuron models [05:31:00].

  • Challenges in Computational Neuroscience:
    • Measurement Limitations: Current brain imaging instruments (PET, fMRI, MEG) cannot yet provide the necessary time-series data of neural activity across large swaths of cortex to reverse-engineer complex processes like abstraction formation [05:27:00].
    • Biological Complexity: Beyond neurons, the brain involves glia, astrocytes, cellular/charge diffusion, and potentially “wet quantum biology,” which are not fully understood [05:41:00].
    • Lack of Holistic Models: Most computational neuroscientists focus on modeling small brain subsystems rather than creating integrated, holistic brain models due to cost and complexity [05:52:00].
  • Promising Directions:
    • Alex Ororbia’s Work: Goertzel is collaborating with Alex Ororbia, who has developed a predictive coding-based learning mechanism for deep neural networks that appears to outperform backpropagation [05:46:00]. This method can work with more biologically realistic neuron models (e.g., Hodgkin-Huxley or chaotic neurons) and incorporate glia, which standard backpropagation cannot [05:51:00]. A minimal sketch of the local-update idea appears after this list.
    • Structured Semantic Representations: The hypothesis is that replacing backpropagation with predictive coding in networks with biologically realistic neurons (like Izhikevich neurons with sub-threshold spreading of activation) could lead to better generalization and more compact neural networks that automatically learn structured semantic representations, allowing for cleaner interfacing with logic-based systems like OpenCog [05:57:00].
  • Hardware Challenges:
    • Parallel Computing: Brain simulations are inherently parallel, while most current computers are fundamentally serial (Von Neumann architecture) [01:01:34]. The success of current DNNs relied on “hijacking GPU processors” for parallelization [01:02:01].
    • Specialized Chips: Goertzel anticipates a proliferation of specialized AI chip architectures beyond GPUs in the next 3-5 years, optimized for different AI algorithms [01:03:50]. He is involved in designing a MIMD parallel processor-in-RAM architecture for OpenCog’s graph and hypergraph pattern matching, suitable for stable knowledge graphs [01:05:15]. The decreasing cost of designing new chips makes it viable to create AGI boards integrating different specialized chips (deep learning, Izhikevich neuron, hypervector, pattern matching) with fast interconnects [01:07:21].
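
To give a flavor of the predictive-coding idea referenced in the list above (hierarchical prediction errors relaxed by purely local updates, in place of backpropagation), here is a minimal NumPy sketch in the style of standard predictive-coding formulations. It is not Ororbia’s actual algorithm; the layer sizes, activation, and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                      # activation function
    return np.tanh(x)

def df(x):                     # its derivative
    return 1.0 - np.tanh(x) ** 2

# Tiny network of "value" layers v[0]..v[3]; W[l] predicts layer l+1 from layer l.
sizes = [4, 16, 16, 3]
W = [rng.normal(0.0, 0.1, (sizes[l + 1], sizes[l])) for l in range(3)]

def train_step(x, y, n_infer=20, lr_v=0.1, lr_w=0.01):
    # Initialize value nodes with a feed-forward sweep, then clamp input and target.
    v = [x]
    for l in range(3):
        v.append(W[l] @ f(v[l]))
    v[-1] = y

    # Inference: relax hidden-layer activities to reduce local prediction errors.
    for _ in range(n_infer):
        e = [v[l + 1] - W[l] @ f(v[l]) for l in range(3)]   # one error per prediction
        for l in range(1, 3):                               # hidden layers only
            v[l] += lr_v * (-e[l - 1] + df(v[l]) * (W[l].T @ e[l]))

    # Learning: Hebbian-like weight updates using only locally available errors.
    e = [v[l + 1] - W[l] @ f(v[l]) for l in range(3)]
    for l in range(3):
        W[l] += lr_w * np.outer(e[l], f(v[l]))

# Example: one update on a random input/target pair.
train_step(rng.normal(size=4), rng.normal(size=3))
```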

3. Chemistry Level Approach: Massively Distributed AI Optimized Artificial Chemistry Simulation

This approach stems from Goertzel’s background in artificial life, which aims to build artificial organisms with simulated metabolisms and genomes within a simulated world [01:16:32]. The core idea is that evolution in biology is intricately linked with self-organization and complex, self-forcing dynamics within organisms and their environments, ultimately boiling down to biochemistry [01:18:00].

  • Abstracting Chemistry: Inspired by Walter Fontana’s “algorithmic chemistry,” this involves abstracting the spirit of chemistry by using small program codelets that act on other programs to produce new ones in complex chains of reactions, simulating a chemical soup [01:18:40]. A toy sketch of such a codelet soup, with a directed-selection loop over many vats, appears after this list.
    • The motivation is to explore if evolving an underlying “chemistry” (or gene expression machinery) could lead to a more expressive representation for producing intelligent phenotypes than natural evolution’s arbitrary chemistry [01:19:48].
    • This approach might be more amenable to play with, easier to understand, require less compute, and avoid the peculiarities of real chemistry [01:24:39].
    • OpenCog Hyperon’s new programming language, MeTTa (spelled M-E-T-T-A), could facilitate this by enabling abstract and modern programming for algorithmic chemistry [01:27:40].
  • Realistic Chemistry Simulation: An alternative within this approach is to simulate real chemistry/biochemistry, as explored by Bruce Damer’s EvoGrid project, which uses grid computing to run computational chemistry simulations aimed at understanding the origin of life [01:22:49]. While intellectually fascinating, this requires immense compute resources [01:26:21].
  • AI-Optimized Artificial Chemistry: To address the compute challenge, Goertzel proposes a hybrid approach: using machine learning to study the evolving chemical soup [01:29:10]. For instance, in a simulation with 10,000 “vats of chemicals,” an AI could identify the most promising vats, kill the least promising, and refill them with mutations or crossovers from the best ones [01:29:20]. This forms a “directed chemical evolution” using machine learning and even proto-AGI to guide the process [01:30:41].
  • Decentralized Implementation: This approach lends itself to decentralized platforms like SingularityNET’s NuNet, where millions of people could run small virtual algorithmic chemistry simulations on their machines. An OpenCog system in the cloud could analyze the progress and refresh the “soups” periodically [01:31:53].
  • Parallelism Challenge: Like brain simulations, chemistry is an inherently parallel process. The current serial nature of most computing systems remains a barrier to fully realizing this approach, highlighting the need for massively parallel, “more lifelike” computing infrastructures [01:33:16].
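
As a concrete (and heavily simplified) illustration of the ideas in the list above, the sketch below sets up a toy algorithmic-chemistry “soup” of program codelets plus a directed-selection loop over many simulated vats. The primitive operations, reaction rule, and scoring function are all invented for illustration, and a crude numeric score stands in for the machine-learning “promise” estimate discussed above; this is not Fontana’s AlChemy, MeTTa, or any SingularityNET code.

```python
import random

# Toy algorithmic chemistry: a codelet is a short program (a list of primitive
# int -> int functions), and a "reaction" between two codelets yields a new one.

PRIMITIVES = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x % 7]

def run(codelet, x):
    """Apply a codelet (a sequence of primitive steps) to an input."""
    for step_fn in codelet:
        x = step_fn(x)
    return x

def react(f, g):
    """One codelet acting on another: simple composition, capped in length."""
    return (g + f)[:64]

def random_codelet(rng, max_len=4):
    return [rng.choice(PRIMITIVES) for _ in range(rng.randint(1, max_len))]

def stir(soup, rng, n_reactions=50):
    """Random collisions; each product replaces a random soup member."""
    for _ in range(n_reactions):
        f, g = rng.sample(soup, 2)
        soup[rng.randrange(len(soup))] = react(f, g)

def score(soup, target=42, probes=range(8)):
    """Toy stand-in for the learned 'promise' estimate mentioned above."""
    return -min(min(abs(run(c, x) - target) for x in probes) for c in soup)

# Directed evolution over many simulated "vats": stir each vat, keep the most
# promising half, and reseed the rest from the survivors.
rng = random.Random(0)
vats = [[random_codelet(rng) for _ in range(30)] for _ in range(20)]
for generation in range(50):
    for vat in vats:
        stir(vat, rng)
    vats.sort(key=score, reverse=True)
    keep = len(vats) // 2
    vats[keep:] = [[list(c) for c in rng.choice(vats[:keep])]
                   for _ in range(len(vats) - keep)]
```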

The Need for Portfolio Diversification and Funding

Goertzel emphasizes that humanity needs to “open up its portfolio bets” in AGI research and invest more significantly in these less mainstream approaches [01:39:41]. Although the emergence of AGI within decades (e.g., 20 to 30 years) is now widely accepted, investment remains low because of financial discount rates and a focus on short-term returns [01:38:00].

  • Funding Priorities: A few hundred billion dollars could massively accelerate AGI R&D, enabling thousands of projects. This amount is trivial compared to government expenditures like defense budgets or stimulus packages [01:40:52].
  • Industry vs. Research: The AI industry prioritizes “fine-tuning narrow AIs” for incremental gains, as it leverages existing large datasets and provides predictable commercial value [02:50:00]. Pursuing AGI is seen as a longer-term, uncertain endeavor that doesn’t yield immediate “incremental goodies” [01:39:26].
  • Lack of Patience: Goertzel notes a cultural shift among younger researchers, who are “addicted to running a learning algorithm on a data set and getting a cool result right away” [01:50:00]. This discourages the sustained, long-term effort required for AGI research, which may not provide immediate feedback [01:53:00].
  • Potential Avenues for Funding:
    • Government Funding: While often conservative, entities like the US NIH and DARPA have shown success in transforming fields through research funding [01:54:00].
    • Cultural Shift/Citizen Science: A shift akin to the open-source software movement could see more people dedicating time to AGI R&D without government funding, especially as more individuals have disposable time and recognize the viability of AGI within their lifetimes [01:47:00].
  • The First Path: Goertzel maintains his primary bet on the cognitive level, hybrid approach (like OpenCog Hyperon) as the most likely path to AGI first [01:38:25]. This approach’s hybrid nature allows it to integrate ideas and modules from other paradigms (e.g., biologically realistic neural nets for perception, algorithmic chemistry for creativity) [01:43:50].

Goertzel concludes that while there will be many paths to AGI, humans may only pursue the first one, with subsequent paths explored by the AGI itself [01:38:00].