From: jimruttshow8596

Introduction to AGI

AGI is an imprecise and informal term referring to computer systems capable of performing tasks that are considered intelligent when humans do them, particularly those they were not specifically programmed or trained for [01:52:00]. This contrasts with narrow AI, which excels at highly particular tasks based on programming or data-driven training [02:35:00]. Humans can generalize and improvise in new domains, unlike narrow AI systems like AlphaFold, which struggles with protein types outside its specific training data [02:47:00].

The concept of computers being generally intelligent is gaining mainstream acceptance, though practical achievement remains distant [05:46:00]. There is no current robot or AI that could perform a simple, everyday task requiring general intelligence, such as making coffee in a random kitchen [06:27:00]. This highlights the “AGI-hard” problems, which demand a level of generalization beyond current capabilities [06:56:00].

Critique of Current Generative AI and Deep Neural Nets (DNNs)

Ben Goertzel, who is credited with coining the term Artificial General Intelligence, believes that Deep Neural Networks (DNNs) and other machine learning algorithms, which currently dominate AI research, are fundamentally unsuited for achieving human-level AGI [13:04:00].

He argues that while DNNs are useful components within an AGI architecture, they lack many key aspects necessary for human-level intelligence [15:38:00]. DNNs operate largely as clever lookup tables, recording and indexing vast amounts of data to supply responses based on relevant historical information [16:27:00]. This approach focuses on “shallow patterns” rather than building a conceptual model of the underlying reality [17:47:00].
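
To make the “lookup table” picture concrete, here is a deliberately crude Python sketch of answering by surface similarity alone. It is a caricature rather than how any real DNN works (DNNs interpolate in a learned feature space instead of storing raw examples), and all prompts and stored responses are invented, but it shows how responding from overlapping particulars, with no model of the situation, yields exactly the failure mode described next:

```python
# Caricature of "shallow pattern" responding: answer a new query by
# retrieving the stored response whose prompt overlaps it most, with
# no model of the underlying reality.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# "Training data": contextualized particulars, no abstractions.
memory = {
    "how to make a table smaller": "cut it down with a table saw",
    "how to widen a doorway": "remove the trim around the door frame",
    "what is the best way to brew coffee": "use a drip machine",
}

def respond(query):
    # Pick the stored response whose prompt overlaps the query most.
    best = max(memory, key=lambda k: jaccard(tokens(k), tokens(query)))
    return memory[best]

# Surface overlap on "table" retrieves a destructive non-answer:
print(respond("how do I fit a large table through a small door"))
```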

An example of this limitation is a DNN suggesting a “table saw” to fit a large table through a small door, as if a table saw were a saw for cutting tables rather than a saw mounted in a table [18:23:00]. This illustrates a knowledge representation issue: information is stored as contextualized particulars without abstract understanding [21:29:00]. Unlike humans, who can generalize from minimal examples (one-shot or few-shot learning), current DNNs rely on massive datasets and do not intrinsically form concise abstractions of their experience [21:50:00].

The current AI industry’s focus on DNNs is driven by their immediate commercial value, as they excel at optimizing well-defined metrics and repeating operations predictably [27:40:00]. This focus steers resources away from AGI research, which requires more imaginative, creative, and unpredictable outcomes [28:32:00].

Three Viable Paths to True AGI

Ben Goertzel outlines three main approaches to achieving human-level AGI:

1. Cognitive Level Approach: Hybrid Neural-Symbolic Evolutionary Metagraph-based AGI

This approach, exemplified by Goertzel’s OpenCog project, aims to emulate the human mind’s high-level functions using advanced computer science algorithms, without strictly mimicking biological details [33:30:00]. It’s akin to designing an airplane by studying birds without replicating the flapping of feathered wings [34:10:00].

Key aspects include:

  • Modular Functions: Designing effective computer science algorithms for distinct cognitive functions like perception, action, planning, working memory, and long-term memory [35:48:00].
  • Cognitive Synergy: Integrating these algorithms so that semi-discrete functions can transparently interact and help each other out [36:36:00].
  • Knowledge Graph (Metagraph): Centering the system on a large, distributed knowledge graph (hypergraph or metagraph) where different AI algorithms act on common knowledge [37:57:00]; a toy sketch of such a shared graph follows this list. OpenCog Hyperon is the new version of this system, incorporating new mathematical approaches to unify learning and reasoning algorithms [39:10:00].
  • Distinction from “Good Old-Fashioned AI” (GOFAI): Unlike GOFAI, this approach:
    • Uses advanced fuzzy, probabilistic, and paraconsistent logic to handle uncertainty and contradictions, rather than crisp logic [44:20:00].
    • Does not rely on hand-coding common sense knowledge, instead focusing on learning from data [44:45:00].
    • Integrates learning deeply, even allowing for logical theorem proving to be used for unsupervised learning on low-level sensory data [45:00:00].
  • Role of Evolution: Evolution is implicitly present in the system’s distributed parallel processes, where “fitness values” (importance in the knowledge graph) drive selection and “logical inference” acts as crossover/mutation [47:17:00]. Explicit genetic algorithms are also used for procedure learning and creativity, such as evolving new logical predicates [49:26:00].
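
As a concrete illustration of the shared-graph idea flagged above, here is a minimal Python sketch combining three ingredients from this list: probabilistic truth values, an importance value that doubles as fitness, and a single toy PLN-style deduction rule. It is an invented miniature, not OpenCog’s actual API; Hyperon’s real knowledge store is the Atomspace, programmed via the MeTTa language:

```python
# Minimal sketch of a shared knowledge (meta)graph with probabilistic
# truth values and importance-driven selection. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class TruthValue:
    strength: float      # probability-like strength in [0, 1]
    confidence: float    # how much evidence backs it, in [0, 1]

@dataclass
class Atom:
    kind: str                     # e.g. "Concept" or "Inheritance"
    name: str
    out: tuple = ()               # outgoing set: atoms a link points at
    tv: TruthValue = field(default_factory=lambda: TruthValue(1.0, 0.0))
    importance: float = 0.0       # attention value; acts like fitness

class Metagraph:
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)
        return atom

    def links(self, kind):
        return [a for a in self.atoms if a.kind == kind]

def deduce(graph):
    """Toy PLN-style deduction: A->B and B->C yields A->C.

    Strength is the product of premise strengths (a crude independence
    assumption); the premises gain importance, so often-used knowledge
    is selected for, an implicit evolutionary dynamic.
    """
    inh = graph.links("Inheritance")
    for ab in inh:
        for bc in inh:
            if ab.out[1] is bc.out[0]:
                tv = TruthValue(ab.tv.strength * bc.tv.strength,
                                min(ab.tv.confidence, bc.tv.confidence))
                ab.importance += 1.0
                bc.importance += 1.0
                graph.add(Atom("Inheritance", "", (ab.out[0], bc.out[1]), tv))

g = Metagraph()
cat = g.add(Atom("Concept", "cat"))
mammal = g.add(Atom("Concept", "mammal"))
animal = g.add(Atom("Concept", "animal"))
g.add(Atom("Inheritance", "", (cat, mammal), TruthValue(0.95, 0.9)))
g.add(Atom("Inheritance", "", (mammal, animal), TruthValue(0.98, 0.9)))
deduce(g)
for a in g.links("Inheritance"):
    print(a.out[0].name, "->", a.out[1].name, round(a.tv.strength, 3))
```

The point of the miniature is cognitive synergy in the small: any other process (a perception module, a genetic algorithm evolving new predicates) could read and write the same atoms, and importance accumulates on whatever knowledge proves repeatedly useful.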

2. Brain Level Approach: Large-Scale Non-Linear Dynamical Brain Simulation

This approach involves simulating the brain’s complex, non-linear dynamics at a detailed level. Goertzel notes that current DNNs are not true brain simulations [51:31:00].

Challenges and Opportunities:

  • Measurement Limitations: A key barrier is the lack of instruments to sufficiently measure time series of neural activity across large cortical areas to reverse-engineer brain processes, particularly concerning abstraction formation [52:52:00].
  • Sophisticated Neuron Models: Work by Gerald Edelman and Eugene Izhikevich on chaos-theory-based neuron models, which are more biologically realistic than the units in modern DNNs, shows promise for understanding how disparate neurons bind together [55:31:00] (the Izhikevich model is simulated in the first sketch after this list).
  • Predictive Coding-based Learning: Alex Ororbia’s work on a “backpropagation killer” for deep neural networks, based on predictive coding, offers a more biologically realistic learning method; because it accommodates richer neuron models and even glial cells, it could lead to better generalization [57:41:00]. It could also enable neural nets to automatically learn structured semantic representations that interface cleanly with logic-based systems like OpenCog [01:00:00] (the second sketch after this list illustrates the core predictive-coding update).
  • Hardware Bottleneck: Detailed brain simulations are a terrible fit for traditional Von Neumann (serial) computing architectures [01:01:34]. While GPUs offer some parallelization for simpler DNNs, more complex brain models require inherently parallel hardware [01:02:01].
    • Specialized Chips: The next 3-5 years are expected to see the emergence of specialized chips for different AI algorithms, beyond just GPUs [01:03:50]. This includes chips optimized for Izhikevich neurons or Multiple Instruction, Multiple Data (MIMD) parallel processor-in-RAM architectures for graph pattern matching, as is being explored for OpenCog [01:05:15]. The declining cost of custom chip design makes “AGI boards” combining different specialized chips viable [01:07:25].
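
Two of the ideas above lend themselves to small demonstrations. First, the Izhikevich model reduces detailed spiking dynamics to two coupled equations plus a reset rule while still reproducing the major cortical firing patterns. Below is a minimal Euler-integration simulation using the “regular spiking” parameters from Izhikevich’s 2003 paper; the injected current and step size are arbitrary demo choices:

```python
# Euler simulation of the Izhikevich spiking-neuron model (2003):
#   v' = 0.04 v^2 + 5 v + 140 - u + I
#   u' = a (b v - u),  with reset v <- c, u <- u + d when v >= 30 mV
# Parameters are the "regular spiking" cortical cell from the paper.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0      # membrane potential (mV) and recovery variable
dt = 0.25                    # integration step (ms); small for stability
spikes = []

for step in range(4000):     # 1000 ms of simulated time
    t = step * dt
    I = 10.0 if t >= 100.0 else 0.0   # step current injected at 100 ms
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:            # spike detected: record time and reset
        spikes.append(t)
        v, u = c, u + d

print(len(spikes), "spikes; first at (ms):", spikes[:3])
```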
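
Second, a minimal sketch of the predictive-coding mechanism that work like Ororbia’s builds on. This is the generic Rao–Ballard-style scheme, not his actual neural generative coding framework: each layer keeps a local prediction error, latent activities settle by descending that error, and weights update from purely local quantities instead of a backpropagated global gradient. The data, sizes, and rates here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 2))   # top-down prediction weights

def infer(y, W, steps=50, lr=0.2):
    """Settle the latent x by descending the local prediction error."""
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        e = y - W @ x                # layer-local prediction error
        x += lr * (W.T @ e - x)      # error-driven update plus decay prior
    return x, y - W @ x

# Toy data with hidden low-rank structure to discover.
data = rng.normal(size=(200, 4))
data[:, 2:] = data[:, :2]            # last two dims copy the first two

for epoch in range(100):
    for y in data:
        x, e = infer(y, W)
        W += 0.01 * np.outer(e, x)   # local, Hebbian-like weight update

err = np.mean([np.sum(infer(y, W)[1] ** 2) for y in data])
print("mean squared reconstruction error:", err)
```

Because every update is local, nothing here depends on the units being simple linear ones; the same scheme can in principle drive far more realistic neuron models, which is what makes it a plausible bridge between brain simulation and learning.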

3. Chemistry Level Approach: Massively Distributed AI Optimized Artificial Chemistry Simulation

This approach draws inspiration from artificial life and aims to simulate a “chemical soup” where “molecules” (e.g., code snippets) react to produce new “molecules” in complex chains, leading to emergent intelligence [01:16:40].
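
To make the soup metaphor concrete, here is a toy algorithmic chemistry in Python, loosely in the spirit of Fontana’s lambda-calculus “AlChemy” (discussed below) but with invented, much simpler molecules: each molecule is an affine map x ↦ ax + b modulo a small prime, and a reaction composes two molecules to produce a third. Closed, self-maintaining families of functions emerge on their own:

```python
import random
from collections import Counter

random.seed(1)
M = 7  # small prime modulus; keeps the space of molecules finite

def compose(f, g):
    """React two molecules: (f . g) applies g, then f.

    A molecule (a, b) is the map x -> a*x + b (mod M)."""
    (a1, b1), (a2, b2) = f, g
    return ((a1 * a2) % M, (a1 * b2 + b1) % M)

# Start from a random soup of 200 molecules.
soup = [(random.randrange(M), random.randrange(M)) for _ in range(200)]

# Each reaction picks two molecules, composes them, and the product
# replaces a random member, so the soup stays the same size.
for _ in range(20000):
    i, j = random.sample(range(len(soup)), 2)
    soup[random.randrange(len(soup))] = compose(soup[i], soup[j])

print("most common molecules:", Counter(soup).most_common(5))
```

Running this, the constant maps (those with a = 0) form an absorbing, self-maintaining set that typically takes over the soup; richer molecule spaces support correspondingly richer emergent organizations.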

Key concepts:

  • Artificial Biochemistry: Moving beyond simple genetic algorithms to simulate more fine-grained artificial organisms with artificial genomes and metabolisms, acknowledging the subtlety of biological processes like protein folding and epigenomics [01:17:10].
  • Algorithmic Chemistry: Abstracting the spirit of chemistry, as explored by Walter Fontana, using “list codelets” or programs that act on other programs to produce new ones [01:18:40].
  • Dual Evolution (Evo-Devo): The idea that if one could evolve the underlying chemistry (or its abstracted form) and its gene expression machinery, it might lead to a more expressive representation for intelligence, potentially finding an “easier way” than nature’s 3.7-billion-year process [01:20:19].
  • Compute Resources: Realistic chemistry simulations are extremely compute-intensive, even more so than brain simulations [01:26:21]. This makes abstracted algorithmic chemistry approaches more appealing for current resources [01:26:59].
  • AI-Directed Evolution: Incorporating machine learning or proto-AGI to study and direct the evolution of the chemical soup. This could involve an AI observer identifying promising “Vats” and regenerating less promising ones based on successful patterns [01:29:08]. This leads to a hybrid architecture where algorithmic chemistry is guided by pattern mining and probabilistic reasoning [01:30:38] (a toy observer loop is sketched after this list).
  • Decentralized Processing: Envisioning a future where millions of individuals contribute processing power to run virtual algorithmic chemistry simulations, with analysis and refresh performed by a decentralized AI platform like SingularityNET [01:31:53].
  • Hardware Parallels: While physical chemistry is inherently parallel, current computing is not. However, creative explorations into massively parallel, “lifelike” computing infrastructures (e.g., nanoscale continuous variable cellular automaton lattices for molecular computing) could offer suitable substrates [01:35:12].
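
A toy version of the observer loop described above, reusing the affine-map chemistry from the earlier sketch: many independent “vats” run in parallel, each is scored by a crude proxy for interestingness (species diversity here; a real system would use pattern mining and probabilistic reasoning), and the weakest vats are regenerated with molecules sampled from the strongest. All names and the scoring proxy are invented for illustration:

```python
import random

random.seed(2)
M = 7  # modulus for the toy affine-map molecules

def compose(f, g):
    """React two molecules by function composition (f after g)."""
    (a1, b1), (a2, b2) = f, g
    return ((a1 * a2) % M, (a1 * b2 + b1) % M)

def random_vat(n=100):
    return [(random.randrange(M), random.randrange(M)) for _ in range(n)]

def step(vat, reactions=500):
    """Run random reactions, keeping the vat at constant size."""
    for _ in range(reactions):
        i, j = random.sample(range(len(vat)), 2)
        vat[random.randrange(len(vat))] = compose(vat[i], vat[j])

def score(vat):
    return len(set(vat))  # proxy for interestingness: species diversity

vats = [random_vat() for _ in range(8)]
for generation in range(10):
    for vat in vats:
        step(vat)
    vats.sort(key=score, reverse=True)
    # Observer: rebuild the weakest half, seeded from the strongest half.
    half = len(vats) // 2
    for i in range(half, len(vats)):
        donor = random.choice(vats[:half])
        vats[i] = [random.choice(donor) for _ in range(100)]
    print("generation", generation, "best diversity:", score(vats[0]))
```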

Future Directions and Challenges

All three approaches are interesting and deserve significantly more attention and funding than they currently receive [01:25:01]. Ben Goertzel emphasizes the need for society to diversify its AGI research portfolio beyond the current mainstream focus on DNNs [01:38:00].

Funding for AGI research is critically under-resourced relative to its potential impact. Large-scale funding, perhaps hundreds of billions of dollars, could massively accelerate AGI R&D, much as the NIH has transformed fields of biology and medicine [01:40:52]. Such an investment would amount to only a fraction of a single nation’s defense budget spread over several years [01:42:42].

Beyond government funding, a significant cultural shift is needed, akin to the rise of open-source software and citizen science [01:47:01]. As more people recognize AGI as a viable goal within their lifetimes and acknowledge that large governments and tech companies may not pursue the most creative paths, a grassroots surge in AGI R&D could occur [01:48:02]. A breakthrough that makes AGI’s arrival undeniable could trigger both increased government funding and widespread grassroots attention [01:48:31].

Goertzel’s own bet remains on the cognitive level, hybrid approach as the most likely path to achieve AGI first. However, he stresses that this hybrid system can integrate ideas from other paradigms, such as biologically realistic neural nets for perception or algorithmic chemistry for creative idea generation [01:38:40].