From: jimruttshow8596
The development of Artificial General Intelligence (AGI) aims to create computer systems that can perform tasks considered intelligent when done by humans, especially tasks they were not specifically programmed or trained for [00:02:04]. Ben Goertzel, who coined the term AGI [00:00:51], argues that current mainstream AI, built primarily on deep neural networks (DNNs) and machine learning (ML) algorithms, is fundamentally unsuited to achieving human-level AGI [00:13:10].
Limitations of Current AI Architectures
Current AI approaches, largely dominated by DNNs, excel at narrow tasks through extensive data-driven training [00:02:35]. AlphaFold, for example, predicts protein structures from patterns in its training data but struggles with cases outside that distribution, such as “floppy proteins” or entirely new classes of molecules [00:04:06]. Humans, by contrast, can improvise and generalize to loosely connected or entirely new domains [00:02:55].
The core issue is that DNNs primarily recognize “shallow patterns” in data, acting like sophisticated lookup tables [00:17:47]; they do not build a model of the underlying reality [00:20:00]. For example, a DNN-based model might suggest using a “table saw” to cut a table down to fit through a door, as if a table saw were a saw for cutting tables, even though documentation explaining its true function appears in its training data [00:18:27]. This reliance on vast data and processing power for particular patterns limits generalization to domains where that data is absent [00:21:01]; generalizing across domains instead depends on finding concise abstractions of experience [00:21:55].
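To make the contrast concrete, here is a toy sketch (an illustration of the general point, not an example from the conversation): a lookup table reproduces its training pairs exactly but has nothing to say off-distribution, while a concise rule extrapolates.

```python
# Toy contrast: memorization vs. a concise abstraction of experience.
train = {x: 2 * x + 1 for x in range(10)}       # observed "experience"

def lookup_model(x):
    return train.get(x)                         # shallow pattern store

def abstract_model(x):
    return 2 * x + 1                            # concise rule covering the data

print(lookup_model(5), abstract_model(5))       # 11 11     (in distribution)
print(lookup_model(1000), abstract_model(1000)) # None 2001 (out of distribution)
```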
This focus on narrow AI is driven by commercial viability; businesses often need AI that can repeat well-understood operations to maximize metrics, rather than systems that are creative and unpredictable [00:27:50].
Three Viable Paths to AGI
Ben Goertzel’s essay, “Three Viable Paths to True AGI,” outlines three alternative research directions. While he believes current DNNs can be significant components of an AGI system, on their own they are incomplete [00:15:41].
1. Cognitive Level Approach: Hybrid Neural-Symbolic Evolutionary Metagraph-Based AGI
This approach, exemplified by the OpenCog project and its new version, OpenCog Hyperon [00:33:55], is inspired by how the human mind functions at a high level [00:33:34]. It involves identifying various cognitive functions like perception, action, planning, working memory, and long-term memory [00:35:08]. The goal is to implement each of these functions using the best available computer science algorithms and then integrate them into a coherent architecture [00:35:51].
A key aspect is “cognitive synergy”: the subsystems carrying out distinct cognitive functions transparently help each other at a deep level [00:36:52]. To enable this, OpenCog uses a large, distributed knowledge graph (a hypergraph, generalized to a metagraph in Hyperon) as the common knowledge base on which the different AI algorithms interact [00:37:57].
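As a rough illustration of this design, the sketch below models a shared atom store that several processes read and write. The class and field names are hypothetical simplifications for this summary, not the actual OpenCog or Hyperon API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    kind: str                 # e.g. "Concept" or "Inheritance"
    name: str
    targets: tuple = ()       # for links: the atoms being connected

class AtomSpace:
    """Toy shared knowledge store; every algorithm works on one instance."""
    def __init__(self):
        self.truth = {}       # atom -> probabilistic truth strength
        self.importance = {}  # atom -> attention value

    def add(self, atom, strength=1.0):
        self.truth[atom] = strength
        self.importance.setdefault(atom, 0.0)
        return atom

space = AtomSpace()
cat = space.add(Atom("Concept", "cat"))
animal = space.add(Atom("Concept", "animal"))
link = space.add(Atom("Inheritance", "", (cat, animal)), strength=0.95)

# Two toy "cognitive processes" sharing one store: a reasoner marks the
# premise it used, and an attention mechanism then surfaces it for others.
space.importance[link] += 1.0
focus = max(space.importance, key=space.importance.get)
print(focus.kind, space.truth[focus])   # Inheritance 0.95
```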
Addressing the “good old-fashioned AI” (GOFAI) critique, Goertzel argues that his approach differs by:
- Using fuzzy, probabilistic, paraconsistent, intuitionistic logic, which accommodates uncertainty and contradiction [00:44:20] (see the sketch after this list).
- Not relying on hand-coding common sense knowledge, but rather on learning [00:44:45].
- Integrating evolutionary learning and other AI ideas alongside logic [00:46:11].
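For a concrete feel of what “fuzzy, probabilistic” inference means here, the sketch below applies the standard PLN-style deduction strength formula to (strength, confidence) truth values. Combining confidences with min() is a simplification for illustration; none of this is taken from the OpenCog codebase.

```python
def deduce(tv_ab, tv_bc, s_b, s_c):
    """Uncertain deduction: from A->B and B->C, estimate A->C.

    tv_* are (strength, confidence) pairs; s_b and s_c are the prior
    probabilities of the middle and final terms.
    """
    (s_ab, c_ab), (s_bc, c_bc) = tv_ab, tv_bc
    if s_b >= 1.0:                       # degenerate case: B covers everything
        return s_bc, min(c_ab, c_bc)
    s_ac = s_ab * s_bc + (1 - s_ab) * (s_c - s_b * s_bc) / (1 - s_b)
    return s_ac, min(c_ab, c_bc)         # simplified confidence combination

# cat->animal is strong, confident evidence; animal->pet is weak and uncertain.
print(deduce((0.98, 0.9), (0.3, 0.4), s_b=0.2, s_c=0.1))
```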
Evolutionary aspects are present implicitly through attention-driven premise selection and uncertain logical reasoning, where importance acts as a fitness value [00:48:29]. Explicit genetic algorithms are also used for procedure learning and creativity, like evolving new logical predicates [00:49:33].
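As a cartoon of the explicit evolutionary side, the sketch below evolves a small boolean predicate against observed data. The representation and fitness function are assumptions chosen for brevity, not OpenCog's actual procedure-learning machinery.

```python
import random
random.seed(0)

# Observations of a hidden concept over three boolean features.
DATA = [((a, b, c), a and b) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def random_pred():
    i, j = random.sample(range(3), 2)
    return (random.choice(("and", "or")), i, j, random.random() < 0.5)

def evaluate(pred, x):
    op, i, j, neg = pred
    v = (x[i] and x[j]) if op == "and" else (x[i] or x[j])
    return bool(v) != neg                # negate when the flag is set

def fitness(pred):                       # accuracy stands in for "importance"
    return sum(evaluate(pred, x) == bool(y) for x, y in DATA) / len(DATA)

pop = [random_pred() for _ in range(40)]
for _ in range(30):
    pop = sorted(pop, key=fitness, reverse=True)[:10]   # select the fittest
    pop += [random_pred() for _ in range(30)]           # simplified variation
best = max(pop, key=fitness)
print(best, fitness(best))               # should recover (a AND b) exactly
```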
2. Brain Level Approach: Large-Scale Non-linear Dynamical Brain Simulation
This path involves direct simulation of the brain’s complex non-linear dynamics [00:52:05]. Unlike the simplified neurons of DNNs, it aims for biological realism, accounting for multiple cell types (neurons and glial cells such as astrocytes), processes like cellular and charge diffusion, and even potential “wet quantum biology” [00:54:11].
A major challenge is the lack of adequate measuring instruments to fully understand brain processes, particularly abstraction formation [00:53:06]. Early efforts like the Blue Brain Project aimed in this direction, and work by Gerald Edelman and Eugene Izhikevich explored chaos theory-based neuron models for holistic brain simulation [00:55:06].
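The Izhikevich model mentioned here is compact enough to state directly. Below is a forward-Euler simulation of a single “regular spiking” neuron using the published parameter set; the constant input current is an arbitrary choice for illustration.

```python
# Izhikevich (2003) neuron: v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u),
# with reset v <- c, u <- u + d whenever v crosses the spike peak.
a, b, c, d = 0.02, 0.2, -65.0, 8.0      # regular-spiking parameter set
v, u = c, b * c                         # membrane potential and recovery
I, dt = 10.0, 0.5                       # input current; time step in ms
spikes = []
for step in range(2000):                # one second of simulated time
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                       # spike peak: record and reset
        spikes.append(step * dt)
        v, u = c, u + d
print(len(spikes), "spikes; first at", spikes[0] if spikes else None, "ms")
```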
Recent developments include Alex Ororbia’s predictive coding-based learning mechanism, which could replace backpropagation in DNNs [00:59:02]. This method allows for more biologically realistic neurons (e.g., Izhikevich neurons with sub-threshold spreading of activation) and potentially better generalization, as it can learn structured semantic representations that interface more cleanly with logic-based systems like OpenCog [01:00:00].
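The sketch below conveys the core predictive-coding idea in miniature: a single latent layer predicts an observation, and both inference and learning use only the local prediction error. This is a generic textbook-style formulation, not Ororbia's specific mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))   # latents (4) -> observation (8)
y = rng.normal(size=8)                   # one observed input pattern
x = np.zeros(4)

for _ in range(200):                     # learning epochs
    x = np.zeros(4)
    for _ in range(20):                  # inference: settle the latent state
        err = y - W.T @ x                # local prediction error
        x += 0.1 * (W @ err - x)         # reduce error (plus a small leak)
    W += 0.05 * np.outer(x, y - W.T @ x) # local Hebbian-style weight update
                                         # -- no backpropagated global gradient

print(float(np.mean((y - W.T @ x) ** 2)))  # reconstruction error shrinks
```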
Hardware poses another challenge, as the inherently parallel nature of the brain is poorly matched by today's serial von Neumann computing architectures [01:01:34]. While GPUs provided the parallelism DNNs needed, full brain simulations call for more sophisticated parallel hardware [01:02:05]. Specialized chips optimized for Izhikevich neurons or for graph pattern matching (like the one Goertzel is working on) are being developed [01:04:27]. Custom hardware development is becoming increasingly viable, potentially yielding AGI boards that combine several specialized chips within three to five years [01:07:25].
3. Chemistry Level Approach: Massively Distributed AI Optimized Artificial Chemistry Simulation
Inspired by artificial life and biochemistry, this approach explores the emergence of intelligence from simulated chemical reactions [01:16:32]. Pioneering work includes Walter Fontana’s “algorithmic chemistry,” where abstract programs act on each other to produce new programs in complex reaction chains [01:18:43].
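A toy rendering of the “programs acting on programs” idea, deliberately much simpler than Fontana's lambda-calculus system, might look like this:

```python
# The "soup" holds small programs; a reaction composes two of them and
# returns the product to the soup, so programs act on programs.
import random
random.seed(1)

soup = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

def react(f, g):
    return lambda x: f(g(x))

for _ in range(200):
    f, g = random.sample(soup, 2)
    soup.append(react(f, g))
    if len(soup) > 100:                      # keep the vat a fixed size
        soup.pop(random.randrange(len(soup)))

# Crude fingerprint of emergent diversity: distinct input/output behaviours.
behaviours = {tuple(fn(x) for x in range(5)) for fn in soup}
print(len(soup), "molecules,", len(behaviours), "distinct behaviours")
```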
The idea is to go beyond merely simulating biological evolution and instead evolve the underlying “chemistry” or gene expression machinery itself, to find more expressive representations for intelligence [01:20:19]. This involves making artificial life models more fine-grained, leading to artificial biochemistry [01:18:29].
A significant hurdle is the immense computational resources required to simulate realistic chemistry [01:26:24]. A more abstracted approach, such as algorithmic chemistry implemented in a modern programming language like MeTTa, might sidestep some of that computational cost [01:27:01].
An innovative approach is “directed chemical evolution,” where machine learning algorithms observe the evolving chemical soup, identify promising “vats,” and then generate new chemical configurations based on successful ones [01:29:08]. This could be implemented in a decentralized manner, with millions of users running virtual algorithmic chemistry soups on their machines [01:32:13].
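The outer loop of such a system could be as simple as the following sketch; run_soup and the diversity score are stand-ins (assumptions) for a real chemistry simulation and a learned “interestingness” model.

```python
# Directed chemical evolution: run many vats, let an observer score them,
# keep the promising ones, and reseed variations of their configurations.
import random
random.seed(2)

def run_soup(config, steps=500):
    """Stand-in for a real chemistry simulation: emits a set of 'species'."""
    rng = random.Random(config)
    return {rng.randrange(int(10 + 90 * abs(config)) + 1) for _ in range(steps)}

def score(species):
    return len(species)        # proxy for the learned novelty/diversity model

configs = [random.random() for _ in range(20)]   # initial vat parameters
for generation in range(10):
    ranked = sorted(configs, key=lambda c: score(run_soup(c)), reverse=True)
    survivors = ranked[:5]                       # the promising vats
    configs = survivors + [c + random.gauss(0, 0.1)
                           for c in survivors for _ in range(3)]
print(max(score(run_soup(c)) for c in configs))
```

In a decentralized deployment, each volunteer machine would run one vat and report only its score and configuration back to the coordinating learner.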
Funding and Future Outlook
Currently, AGI research, especially in these less mainstream areas, is significantly underfunded compared to narrow AI or other global expenditures [01:25:17]. This is partly due to the long-term, uncertain nature of AGI development, which doesn’t offer quick commercial returns [01:29:27]. There’s also a cultural tendency among newer AI researchers to focus on short-term results [01:15:06].
Goertzel suggests that a breakthrough in AGI could catalyze both government funding and a grassroots cultural shift, similar to the rise of open-source software [01:47:01]. While big tech companies may continue to prioritize controllable AI for financial return, a more open and collaborative research environment could accelerate AGI development [01:48:51]. The first viable path to AGI may be a hybrid approach, capable of integrating ideas from other paradigms, such as biologically realistic neural nets for perception or algorithmic chemistry for creative idea generation [01:38:40].