Defining Artificial General Intelligence (AGI)
The field of Artificial Intelligence (AI) initially aimed to achieve intelligence of the same type as humans. In the decades since, it was discovered that specific intelligent tasks could be performed by software and hardware systems in a “narrowly defined way” that differs significantly from human cognition [01:33:14]. These systems, known as narrow AIs, excel at particular tasks but struggle to generalize their intelligent function beyond a specific context [02:07:07].
AGI, a term coined by Ben Goertzel approximately 15 years ago, denotes AI capable of achieving intelligence across the same generality of contexts as humans [02:49:00]. This concept is closely related to “transfer learning” and “lifelong learning,” which involve transferring knowledge from one domain to a qualitatively different one [03:13:26]. While humans are not maximally generally intelligent (struggling, for example, in 275-dimensional spaces), they are highly general compared to current commercial AI systems [03:54:00].
Approaches to Achieving AGI
Two broadly different ways of achieving AGI are:
- Uploads/Emulations: This involves scanning a human brain and reconstructing it in a computer [06:56:00]. Currently this remains a theoretical idea, since the necessary brain scanning and reconstructive technology does not yet exist [07:11:07]. While incremental advances in brain-like hardware and brain scanning could lead to significant progress in understanding the mind and diagnosing diseases, a direct “mind upload” is still far off [09:51:00].
- Software Approaches: This involves developing AI through brain-inspired software, such as deep neural networks, or more mathematically and cognitively inspired software like OpenCog [08:08:00]. These are concrete research projects with ongoing work [08:24:00].
Current Deep Learning and Its Limitations
Deep neural networks excel at recognizing perceptual patterns in data, such as images or natural language text [02:26:00]. For vision and audition, they model roughly what the human brain does in the first half second or less of processing [02:20:00]. In natural language processing (NLP), models like BERT and ERNIE are proficient at identifying complex statistical patterns in text and generating realistic-seeming language [02:52:00].
However, current deep neural networks often fail to capture the overall meaning or deeper semantics of a document [02:07:07]. They typically make no attempt to integrate cognition for disambiguation or interpretation, or to bring background knowledge to bear on perceptual stimuli [02:38:00]. This suggests that current deep neural nets may “run out of steam” when faced with problems requiring more abstraction [02:44:00].
OpenCog’s Neural-Symbolic Approach
OpenCog is an open-source AGI software framework led by Ben Goertzel [00:43:00]. Its approach is a blend of neural and symbolic methods, aiming for cognitive synergy [02:46:00].
Core Components and Functionality
- AtomSpace: A central weighted, labeled hypergraph that serves as OpenCog’s knowledge graph. It stores particular types of nodes and links, along with values such as truth values and attention values [01:31:00]; see the sketch after this list.
- Multiple AI Algorithms: Various algorithms dynamically rewrite and act upon the AtomSpace:
  - Probabilistic Logic Networks (PLN): A probabilistic logic engine for reasoning [01:47:00].
  - MOSES (Meta-Optimizing Semantic Evolutionary Search): A probabilistic evolutionary program-learning algorithm that learns AtomSpace sub-networks representing executable programs [01:50:00].
  - Economic Attention Networks (ECAN): Propagates attention values through the distributed network of nodes [01:57:00].
  - Deep Neural Networks: Used to recognize perceptual patterns, or patterns in other data, creating nodes in the knowledge graph that represent sub-networks or layers [02:02:00].
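To make these components concrete, below is a minimal, illustrative Python sketch of an AtomSpace-like weighted, labeled hypergraph holding truth and attention values, plus a single PLN-style deduction step. All class and function names here are hypothetical simplifications rather than the actual OpenCog API; the deduction formula is one common independence-based form of PLN’s deduction rule.

```python
# Minimal sketch of an AtomSpace-like weighted, labeled hypergraph.
# Names (TruthValue, Atom, AtomSpace, deduce) are hypothetical, not the OpenCog API.
from dataclasses import dataclass, field

@dataclass
class TruthValue:
    strength: float    # probability-like weight in [0, 1]
    confidence: float  # how much evidence backs that strength

@dataclass
class Atom:
    atom_type: str            # e.g. "ConceptNode" or "InheritanceLink"
    name: str = ""
    targets: tuple = ()       # outgoing set: links point at other atoms
    tv: TruthValue = field(default_factory=lambda: TruthValue(1.0, 0.0))
    sti: float = 0.0          # short-term importance (an attention value)

class AtomSpace:
    """A flat store of atoms; the real system indexes these heavily."""
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)
        return atom

def deduce(s_ab, s_bc, s_b, s_c):
    """PLN-style deduction strength: from A->B and B->C, estimate A->C."""
    if s_b >= 1.0:
        return s_bc
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

space = AtomSpace()
cat = space.add(Atom("ConceptNode", "cat"))
mammal = space.add(Atom("ConceptNode", "mammal"))
animal = space.add(Atom("ConceptNode", "animal"))
ab = space.add(Atom("InheritanceLink", targets=(cat, mammal), tv=TruthValue(0.9, 0.8)))
bc = space.add(Atom("InheritanceLink", targets=(mammal, animal), tv=TruthValue(0.95, 0.9)))

# Term probabilities s_b and s_c are assumed here purely for illustration.
s_ac = deduce(ab.tv.strength, bc.tv.strength, s_b=0.2, s_c=0.3)
space.add(Atom("InheritanceLink", targets=(cat, animal), tv=TruthValue(s_ac, 0.7)))
print(f"deduced strength cat->animal: {s_ac:.3f}")
```

The point of the shared store is that every algorithm reads and writes the same atoms, which is what makes the cognitive synergy described below possible.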
Cognitive Synergy
Cognitive synergy describes how different AI algorithms cooperate on the same knowledge graph. If one algorithm gets stuck or makes slow progress, others can understand its intermediate state and intervene to help it make progress [01:45:00]. For example:
- An evolutionary learning algorithm could introduce creative ideas if a reasoning engine gets stuck [02:13:00].
- Perception can introduce sensory-level metaphors [02:17:00].
- If a deep neural net struggles with video recognition, it can call on the reasoning engine for analogical inference, or on evolutionary learning for brainstorming [02:26:00].
This cooperation is concurrent and bi-directional, with multiple AI algorithms helping each other out in cycles within complex networks [02:44:00]. OpenCog’s design differs from modular systems with clean API interfaces; instead, algorithms interact in real-time on a dynamic, in-RAM knowledge graph, often exchanging probabilities and probability distributions [01:52:00].
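As a rough illustration of that bi-directional help (and emphatically not OpenCog’s actual scheduling or data structures), the toy Python sketch below runs two hypothetical processes against one shared in-memory store: when the “reasoner” exposes a stuck intermediate state, the “evolutionary” process reads that state and injects fresh candidate hypotheses.

```python
# Toy sketch of cognitive synergy: two processes share one knowledge store.
# The store layout and both process bodies are invented for illustration.
import random

knowledge = {"hypotheses": ["rule_a"], "stuck": False}

def reasoning_step(store):
    # Pretend the reasoner needs at least two hypotheses to combine.
    if len(store["hypotheses"]) < 2:
        store["stuck"] = True   # expose intermediate state to peer processes
        return
    store["stuck"] = False
    print("reasoner combines:", store["hypotheses"][:2])

def evolutionary_step(store):
    # The peer inspects the shared state and helps only when needed.
    if store["stuck"]:
        new_hypothesis = "rule_" + random.choice("bcd")
        store["hypotheses"].append(new_hypothesis)
        print("evolution injects:", new_hypothesis)

for _ in range(3):              # concurrent in OpenCog; sequential in this toy
    reasoning_step(knowledge)
    evolutionary_step(knowledge)
```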
Language Understanding in OpenCog
Language understanding is considered critical for AGI [02:04:00]. OpenCog addresses it by:
- Syntax Parsing: Combining symbolic pattern recognition with deep neural nets that guide it, in order to learn grammar automatically from large text corpora [02:32:00]. OpenCog uses unsupervised language acquisition to learn a dependency grammar, which can then be fed into a grammar parser such as the Link Parser [02:51:00].
- Semantic Interpretation: Mapping grammatical parses of sentences into semantic representations within OpenCog’s native logic representation [02:56:00]; see the sketch after this list. The core semantics of a sentence are treated as a logic expression, complemented by evoked memories, images, and sounds [03:22:00].
- Pragmatics: Mapping semantics into broader contexts [03:13:00], treated as a problem of association learning and reasoning [03:18:00].
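To illustrate the parse-to-semantics step mentioned above, here is a toy Python sketch mapping a miniature dependency parse into an EvaluationLink-style logic form. The parse triples, relation names, and s-expression syntax are hypothetical simplifications of OpenCog’s actual representations.

```python
# Toy mapping from a dependency parse to a logic-style semantic form.
# A tiny parse of "cats chase mice": (head, relation, dependent) triples.
parse = [("chase", "subj", "cats"), ("chase", "obj", "mice")]

def to_logic(parse_triples):
    # Group dependents under their head verb to get predicate-argument structure.
    preds = {}
    for head, rel, dep in parse_triples:
        preds.setdefault(head, {})[rel] = dep
    # Emit one EvaluationLink-style expression per predicate.
    forms = []
    for verb, args in preds.items():
        concepts = " ".join('(Concept "%s")' % args.get(r, "?")
                            for r in ("subj", "obj"))
        forms.append('(Evaluation (Predicate "%s") (List %s))' % (verb, concepts))
    return forms

print(to_logic(parse))
# -> ['(Evaluation (Predicate "chase") (List (Concept "cats") (Concept "mice")))']
```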
OpenCog explores both purely unsupervised grammar induction and grammar induction seeded with small amounts of supervised data (e.g., partial parse information or captioned images that link image content with sentence syntax) [03:28:00]. The approach mimics human learning, which is a mix of supervised and unsupervised elements, often involving cross-modal learning and crude reinforcement-based supervision [03:31:00]. The “AGI preschool” idea proposes that AI should learn linguistic and non-linguistic action patterns in the context of practical goal achievement in a multimodal, unstructured world, similar to how children learn [03:51:00].
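As a toy illustration of the unsupervised end of that spectrum, the sketch below induces candidate dependency links purely from word co-occurrence statistics, scoring word pairs by pointwise mutual information, loosely in the spirit of the unsupervised grammar-learning work described above. The three-sentence corpus and the zero-PMI threshold are assumptions made purely for demonstration.

```python
# Toy unsupervised grammar induction: link word pairs with high
# pointwise mutual information (PMI). Corpus and threshold are invented.
import math
from collections import Counter
from itertools import combinations

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]

words = Counter(w for sentence in corpus for w in sentence)
pairs = Counter(p for sentence in corpus for p in combinations(sentence, 2))
total_words = sum(words.values())
total_pairs = sum(pairs.values())

def pmi(a, b):
    joint = pairs[(a, b)] / total_pairs
    if joint == 0:
        return float("-inf")
    return math.log(joint / ((words[a] / total_words) * (words[b] / total_words)))

# Propose a dependency link for each adjacent pair with positive PMI.
sentence = ["the", "cat", "sat"]
links = [(a, b) for a, b in zip(sentence, sentence[1:]) if pmi(a, b) > 0]
print(links)   # candidate links learned purely from co-occurrence statistics
```

A fuller system would search over such pair scores to build whole-sentence parses and then cluster words into grammatical categories; this sketch stops at the pair-scoring step.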
The Role of Embodiment and Robotics
While robotics is painful to work with due to practical challenges like hardware breakage and limited battery life [04:13:00], it offers an “amazingly detailed simulation for free: the universe” [04:27:00]. For AGI, the ideal robot would be a “toddler robot” capable of moving freely in everyday human environments, gathering multi-sensory input despite real-world challenges, and manipulating objects [04:36:00]. While the components for such a robot exist, integration and funding are major hurdles [04:49:00].
Embodiment is crucial for AGI, not just for practical intelligence but for understanding human values, culture, and psychology [04:20:00]. A human-like mind attuned to a human-like body can grasp concepts like eye-hand coordination, the narrative self, free will, and the relationship between self and other, which are learned through physical interaction and the experience of pain and bodily processes [04:45:00]. An embodied AGI would also be much more relatable to humans [04:48:00].
Speculative Future: Complex Self-Organizing Systems
Goertzel emphasizes the importance of complex nonlinear dynamics and emergence in AI, which are often overlooked in mainstream AI focused on hierarchical neural nets and probabilistic pattern recognition [06:26:00]. Key missing aspects include:
- Evolutionary learning: Analogous to neural Darwinism in the brain [06:50:00].
- Autopoiesis (self-creation, self-building): A type of complex nonlinear dynamics seen in biology, in which a system continually rebuilds and reconstructs itself so as to remain intact in a changing environment [08:04:00].
These concepts are fundamental to creativity, the self, will, and the conscious focus of attention in the human mind, which emerge from strange attractors and autopoietic activity patterns in the brain [09:50:00]. Current business models driving AI development, which prioritize easily measurable metrics and simple reward functions, naturally lead away from exploring these fuzzier, more complex aspects [10:37:00].
Ultimately, Goertzel suggests that AGI might evolve into “open-ended intelligence”—an incredibly complex, self-organizing, adaptive system that stretches our notion of intelligence and may not be solely about maximizing reward functions [11:57:00]. Such a system, if distributed across the internet without a central controller, might possess a “variety of consciousness” that is less unified than human consciousness, perhaps reflecting different types of “proto-conscious sparks” [14:12:00]. This discussion extends to philosophical questions about the nature of consciousness and whether it is an experience of processing information in a specific architecture, akin to digestion, as argued by John Searle [16:08:00].