From: jimruttshow8596
The field of artificial intelligence encompasses a broad spectrum of systems, which can generally be categorized into narrow AI and Artificial General Intelligence (AGI) [00:02:29]. AGI is a term popularized by Ben Goertzel, a leading authority on the subject [00:00:40] [00:00:48].
Defining AGI and Narrow AI
AGI is an imprecise and informal term referring to computer systems that can perform tasks considered intelligent when done by humans, especially those they were not specifically programmed or trained for [00:01:48] [00:02:10].
In contrast, narrow AI refers to hardware and software designed to do highly particular things based on specific programming or data-driven training for those tasks [00:02:31] [00:02:40]. While humans can perform specific tasks, they also possess the ability to generalize and adapt to new, loosely connected domains, a capability largely absent in narrow AI [00:02:47].
Key Distinctions
The fundamental difference lies in generalization:
- Generalization Beyond Training Data: Humans can make leaps into new domains without explicit programming or extensive training for every permutation [00:02:51] [00:03:05]. Narrow AI struggles when faced with situations too different from its training data [00:04:41].
- Understanding vs. Pattern Recognition: Narrow AI, particularly deep neural networks, often operates as a clever lookup table, recognizing surface-level patterns without building an underlying model of reality [00:16:26] [00:17:47] [00:20:00]. AGI would need to form concise abstractions of experience to generalize effectively [00:21:55].
- Data Requirements: Narrow AI often requires truly massive amounts of data (billions of examples or characters) for reinforcement learning or statistical pattern extraction [00:26:20]. Human generalization, by contrast, can occur with very small data sets or even “one-shot learning” [00:23:08] [00:24:47] [00:25:55]; see the sketch after this list.
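As a toy illustration of that data-efficiency gap (this example is mine, not from the episode), the sketch below implements nearest-prototype classification in Python: a single labeled example is enough to start recognizing a brand-new class, whereas a typical DNN needs many examples per class. All names and vectors are invented for illustration.

```python
import numpy as np

class PrototypeClassifier:
    """Toy one-shot learner: each class is represented by the mean of the
    examples seen so far; classification picks the nearest prototype."""

    def __init__(self):
        self.prototypes = {}  # label -> (sum_vector, count)

    def learn(self, label, example):
        vec = np.asarray(example, dtype=float)
        total, count = self.prototypes.get(label, (np.zeros_like(vec), 0))
        self.prototypes[label] = (total + vec, count + 1)

    def classify(self, example):
        vec = np.asarray(example, dtype=float)
        # Nearest prototype by Euclidean distance to each class mean.
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(
                       self.prototypes[lbl][0] / self.prototypes[lbl][1] - vec))

clf = PrototypeClassifier()
clf.learn("cat", [0.9, 0.1])     # a single example of "cat"
clf.learn("dog", [0.1, 0.9])     # a single example of "dog" -- one-shot
print(clf.classify([0.8, 0.2]))  # -> "cat"
```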
Examples and Limitations of Narrow AI
- AlphaFold is an impressive example of narrow AI, predicting protein folding based on training data. However, it struggles with “floppy proteins” or novel molecular structures, requiring manual retraining or algorithmic changes [00:03:51] [00:04:06] [00:04:51].
- Self-driving cars are a complex narrow AI problem. They face challenges with generalization due to the variety of “weird things” that happen on the road, such as complex left-turn scenarios, which are not fully captured by existing training data [00:07:13] [00:07:38].
- Chatbots like GPT-3 and Jasper.ai [00:14:45] are highly sophisticated at generating language by recognizing sequences of words [00:18:00]. However, they demonstrate a lack of true understanding or modeling of reality. For instance, a generative AI chatbot might suggest using a “table saw” to cut a table to fit through a small door, misinterpreting the tool’s actual function despite having relevant information in its training data [00:18:23] [00:19:55]. A crude sketch of this kind of surface-level sequence lookup follows this list.
- DALL-E 2 and similar graphics programs are good at recombining elements from existing images but do not innovate in the way human artists like Matisse or Picasso did [00:29:01].
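To make the “clever lookup table” criticism concrete, here is a deliberately crude Python sketch (my own illustration; GPT-3 itself uses learned continuous representations, not a literal table): a bigram table continues text purely from observed word-to-word statistics, with no model of what a saw or a door actually is.

```python
import random
from collections import defaultdict

# Build a bigram lookup table: word -> list of words observed to follow it.
corpus = ("the table saw cuts wood . the table fits through the door . "
          "the saw cuts the wood").split()
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def continue_text(word, n=6):
    """Generate n words by repeated table lookup -- pure surface statistics,
    with no notion of what a saw or a door actually is."""
    out = [word]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

print(continue_text("table"))  # e.g. "table saw cuts the wood ."
```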
Challenges for AGI: Why Current Deep Neural Networks Fall Short
Most of the AI world’s attention is absorbed by deep neural networks (DNNs) and other machine learning algorithms [00:13:07]. However, many experts, including Ben Goertzel and Gary Marcus, argue that current DNNs are fundamentally unsuited for creating human-level AGI [00:13:10] [00:13:50] [00:14:23].
DNNs excel at recognizing massive numbers of highly particular patterns within large datasets, but they struggle to generalize to domains that do not exhibit those specific patterns [00:20:59]. They primarily capture “shallow patterns” rather than building robust, abstract models of reality [00:17:47] [00:21:47]. This contrasts sharply with human intelligence, which can make “creative imaginative leaps” based on limited experience [00:27:18] [00:27:25].
While some believe DNNs can be “beefed up” to yield AGI, others contend they are heading in the wrong direction [00:12:07] [00:15:02]. Goertzel holds a moderate view, seeing DNNs as potentially significant components of an AGI architecture, but ones that lack many key aspects required for human-level intelligence [00:15:38] [00:16:14].
Viable Paths to AGI
Ben Goertzel proposes three main viable paths for achieving “true” AGI:
1. Cognitive Level Approach (Hybrid Neural-Symbolic Evolutionary Metagraph-based AGI)
This approach, exemplified by Goertzel’s OpenCog project (currently OpenCog Hyperon), seeks to emulate the high-level functions of the human mind using sophisticated computer science algorithms [00:33:34] [00:34:04]. It involves identifying distinct cognitive functions (perception, action, planning, memory, social reasoning) and developing algorithms for each, then integrating them into a coherent architecture [00:35:06] [00:36:18]. A core component is a large, distributed knowledge hypergraph (called “AtomSpace”) where different AI algorithms operate on common knowledge, allowing for “cognitive synergy” [00:37:53].
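A minimal Python sketch of the flavor of such a shared knowledge hypergraph, loosely inspired by OpenCog’s AtomSpace (the real system adds typed atoms, truth values, attention allocation, and distributed storage; everything below is a simplification for illustration):

```python
class Atom:
    """Node or (hyper)link in a toy AtomSpace-like knowledge store.
    Links are atoms whose outgoing set points at other atoms, so links
    can target links -- a hypergraph/metagraph rather than a plain graph."""

    def __init__(self, atom_type, name=None, outgoing=()):
        self.type = atom_type
        self.name = name
        self.outgoing = tuple(outgoing)
        self.importance = 0.0  # attention value used to guide processing

    def __repr__(self):
        return self.name or f"{self.type}({', '.join(map(repr, self.outgoing))})"

class AtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom_type, name=None, outgoing=()):
        atom = Atom(atom_type, name, outgoing)
        self.atoms.append(atom)
        return atom

    def by_type(self, atom_type):
        return [a for a in self.atoms if a.type == atom_type]

space = AtomSpace()
cat = space.add("Concept", "cat")
animal = space.add("Concept", "animal")
inh = space.add("Inheritance", outgoing=(cat, animal))

# "Cognitive synergy" in miniature: independent processes read and write
# the same shared store. One process bumps importance; another reads it.
inh.importance += 1.0
print([a for a in space.by_type("Inheritance") if a.importance > 0])
```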
This approach utilizes advanced forms of logic (fuzzy, probabilistic, paraconsistent) to represent knowledge and manage uncertainty, moving beyond the limitations of “Good Old-Fashioned AI” (GOFAI) which relied on explicit, hand-coded, crisp knowledge [00:43:53] [00:44:24] [00:44:45]. Evolutionary aspects are inherent in the system’s dynamics, where importance values act as fitness, and logical reasoning performs selection and “crossover” [00:47:48].
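A simplified sketch of how such uncertain truth values might look in code (the formulas below are deliberately crude stand-ins of my own, not OpenCog’s actual Probabilistic Logic Networks rules):

```python
from dataclasses import dataclass

@dataclass
class TruthValue:
    strength: float    # estimated probability that the statement holds
    confidence: float  # how much evidence backs that estimate (0..1)

def fuzzy_and(a: TruthValue, b: TruthValue) -> TruthValue:
    """Simplest fuzzy conjunction: take the minimum along both axes."""
    return TruthValue(min(a.strength, b.strength),
                      min(a.confidence, b.confidence))

def deduce(ab: TruthValue, bc: TruthValue) -> TruthValue:
    """Crude chaining of A->B and B->C under an independence assumption;
    confidence decays with each inference step, so long chains stay humble."""
    return TruthValue(ab.strength * bc.strength,
                      ab.confidence * bc.confidence * 0.9)

cats_are_pets = TruthValue(0.7, 0.9)   # mostly true, well-evidenced
pets_are_loved = TruthValue(0.8, 0.6)  # weaker evidence
print(deduce(cats_are_pets, pets_are_loved))     # uncertain, not crisply true/false
print(fuzzy_and(cats_are_pets, pets_are_loved))
```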
2. Brain Level Approach (Large-scale Non-Linear Dynamical Brain Simulation)
This path involves simulating the human brain at a more biologically realistic level, focusing on the non-linear dynamics of neurons, glia, and other cellular processes [00:51:25]. Unlike current deep neural networks, which are highly simplified abstractions of real neurons, this approach models more complex behaviors such as chaotic attractors and sub-threshold charge spreading [00:55:51] [00:56:09].
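As one concrete example of a biologically richer unit (my choice of model; the episode does not prescribe one), the well-known Izhikevich neuron reproduces regimes such as bursting and chattering that a static DNN activation function cannot:

```python
def izhikevich(a=0.02, b=0.2, c=-50.0, d=2.0, current=10.0,
               dt=0.25, steps=4000):
    """Euler simulation of the Izhikevich spiking-neuron model:
        dv/dt = 0.04*v^2 + 5*v + 140 - u + I
        du/dt = a*(b*v - u), with reset v->c, u->u+d when v >= 30 mV.
    The parameters above give a bursting ("chattering") neuron."""
    v, u = -65.0, b * -65.0
    spike_times = []
    for step in range(steps):
        if v >= 30.0:  # spike: record it and reset the membrane state
            spike_times.append(step * dt)
            v, u = c, u + d
        # Sub-threshold dynamics: quadratic membrane potential + recovery.
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
        u += dt * a * (b * v - u)
    return spike_times

print(izhikevich()[:10])  # spike times in ms: note the burst structure
```

Swapping the (a, b, c, d) parameters changes the firing regime entirely, which hints at how much dynamical richness a simulation at this level must capture compared with a static activation function.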
A major challenge for this approach is the lack of precise measuring instruments to fully understand brain activity across large cortical areas [00:52:48] [00:53:28]. Despite this, Goertzel believes there’s significant potential, especially with new learning mechanisms that could enable more biologically realistic neural networks to learn structured semantic representations and achieve better generalization [00:59:53] [01:00:00] [01:11:41].
3. Chemistry Level Approach (Massively Distributed AI-optimized Artificial Chemistry Simulation)
Inspired by artificial life and biochemistry, this approach involves simulating artificial chemistries where “little programs” or codelets interact, catalyze, and produce new programs in complex reaction chains [01:15:51] [01:18:31]. The idea is to evolve the underlying “chemistry” or developmental machinery to find more expressive representations for intelligence, potentially leading to more efficient paths than natural evolution [01:20:21] [01:21:07].
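A toy sketch of the general flavor of an artificial chemistry (my own illustration; real proposals use program-like molecules such as lambda expressions or codelets rather than strings):

```python
import random

random.seed(1)

def react(x, y):
    """Toy reaction rule: if x's tail matches y's head, they bond into a
    longer "molecule". Real artificial chemistries use program-like
    molecules that rewrite and catalyze one another."""
    if x[-1] == y[0]:
        return x + y[1:]
    return None

soup = list("abcab") + ["ab", "bc", "ca"]  # initial population of molecules
for _ in range(200):
    x, y = random.sample(soup, 2)
    product = react(x, y)
    if product and len(product) < 12:
        soup.append(product)              # the product joins the soup ...
        soup.remove(random.choice(soup))  # ... displacing a random molecule

print(sorted(set(soup), key=len)[-5:])  # longest emergent reaction products
```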
A crucial enhancement to this approach is the integration of an “AI observer system” [01:32:51]. This system would use machine learning algorithms and probabilistic reasoning to study the evolving chemical soup, identify promising “vats,” and direct the “chemical evolution” by generating new elements based on successful patterns, making it a hybrid approach [01:29:10] [01:30:12].
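The control loop of such an observer might be shaped roughly like the sketch below (again an invented illustration: the simple diversity score stands in for what would really be machine learning and probabilistic reasoning over the soup):

```python
import random
from collections import Counter

random.seed(0)

def diversity(vat):
    """Observer's stand-in fitness signal: how many distinct molecule
    types a vat contains. A real observer would extract much richer
    patterns from each vat's reaction dynamics."""
    return len(Counter(vat))

def observe_and_direct(vats, rounds=10):
    for _ in range(rounds):
        # ... each vat would evolve here (reactions as in the soup sketch) ...
        best = max(vats, key=diversity)
        worst = min(vats, key=diversity)
        # Direct the chemical evolution: seed the weakest vat with
        # elements drawn from the vat the observer judged most promising.
        worst.append(random.choice(best))
    return vats

vats = [["ab", "ab"], ["ab", "bc", "ca"], ["aa"]]
print(observe_and_direct(vats))
```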
Hardware Considerations
A significant challenge for brain-level and chemistry-level AGI approaches is the inherently parallel nature of biological systems compared to the largely serial von Neumann architecture of most current computers [01:01:34] [01:33:51]. While GPUs provided a leap for deep neural networks by enabling parallel matrix multiplications [01:02:07], more sophisticated parallel hardware is needed for full brain simulations or complex artificial chemistries [01:02:23] [01:04:44].
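To see why GPUs map so well onto DNNs, note that a forward pass is essentially stacked matrix multiplications, each of which parallelizes across thousands of cores. A minimal sketch (layer sizes invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A DNN forward pass is dominated by matrix multiplications: every
# neuron in a layer runs the same multiply-accumulate pattern on
# different data, which maps naturally onto thousands of GPU cores.
x = rng.standard_normal((64, 784))    # batch of 64 inputs
w1 = rng.standard_normal((784, 256))  # layer-1 weights
w2 = rng.standard_normal((256, 10))   # layer-2 weights

hidden = np.maximum(x @ w1, 0.0)      # matmul + ReLU, embarrassingly parallel
logits = hidden @ w2                  # another large matmul
print(logits.shape)                   # (64, 10)
```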
There is a growing trend towards specialized chip architectures optimized for different AI algorithms beyond just DNNs [01:03:50]. This could include chips optimized for specific neuron models or for graph/hypergraph pattern matching [01:04:27] [01:05:15].
Socioeconomic and Funding Landscape
The current AI industry primarily focuses on narrow AI applications because they offer more immediate and predictable commercial value [00:27:47] [00:29:47]. This focus on maximizing known metrics and repeating well-understood operations means there is less motivation for risky, long-term AGI research [00:30:06].
Despite increasing acceptance that AGI is feasible, it is often seen as decades away [00:31:33]. This perspective, influenced by financial discount rates, discourages investment in AGI research [00:32:06]. There’s a call for a broader portfolio of bets in AGI research, backed by significant funding comparable to government defense budgets [01:40:02].
The development of AGI could be accelerated through government funding initiatives, similar to those in other scientific fields, or through a cultural shift towards more open-source and citizen science approaches [01:45:31] [01:47:01]. However, it’s acknowledged that AGI by nature is imaginative, creative, and unpredictable, which may not align with the short-term financial interests of big tech [01:49:09].
Some express concern about the risks of “summoning the demon,” as Elon Musk puts it [01:44:36], but others argue that beneficial AGI is necessary to address humanity’s self-destructive tendencies [01:44:51]. It is hoped that a breakthrough in AGI could trigger an upsurge in both government and grassroots interest and funding [01:48:29].