From: jimruttshow8596
What is AGI?
Artificial General Intelligence (AGI) refers to Artificial Intelligence (AI) at a fully human level and beyond [00:00:35]. The term “AGI” was coined by Ben Goertzel approximately 15 years ago to differentiate it from narrow AI [00:02:49].
AGI vs. Narrow AI
When the AI field began in the mid-20th century, the informal goal was to achieve intelligence of the same type as humans [00:01:18]. However, over subsequent decades, it was discovered that software and hardware systems could perform specific tasks that appeared intelligent, but did so in a very different and narrowly defined way compared to humans [00:01:33]. For example, a chess program could play as well as a grandmaster but couldn’t play Scrabble or checkers without reprogramming [00:01:54].
Narrow AIs excel at particular intelligence-seeming tasks within narrowly defined contexts, yet they operate differently from human intuition and struggle to generalize their intelligent functions beyond a very specific context [00:02:07]. In contrast, AGI aims for intelligence with at least the same generality of contexts that people can manage [00:03:05]. Concepts like “transfer learning” and “lifelong learning” are closely related to AGI, as achieving general intelligence requires the ability to transfer knowledge from one domain to a qualitatively different one [00:03:13].
While humans are very general compared to current commercial AI systems, they are not “maximally generally intelligent” [00:03:37]. For instance, humans would perform poorly in a 275-dimensional space, indicating limitations in generalizing beyond the dimensionality of our physical universe [00:03:56]. The research goal for AGI is to create AIs that are at least as generally intelligent as humans, and ultimately, even more so [00:04:19].
When to Expect Human-Level AGI
Estimates for achieving human-level AGI vary, but a common range is 5 to 30 years from now [00:04:59]. While some estimates range from 5-10 years to hundreds of years, very few serious AI researchers believe it will never happen [00:05:13]. A small minority suggest that a digital computer cannot achieve human-level general intelligence because the human brain might rely on quantum or quantum gravity computing [00:05:22]. However, setting aside this minority, a substantial plurality, perhaps a majority, of researchers believe AGI will arrive within the next century [00:06:16]. Over the last decade, the mean and variance of these estimates have decreased, with the trend moving towards more optimistic timelines [00:05:56].
Approaches to AGI
Brain Uploads/Emulations vs. Software Approaches
AGI approaches can broadly be divided into brain uploads/emulations and software-based methods [00:06:52].
- Brain Uploads/Emulations: Currently, achieving AGI via an upload or emulation of the human brain is primarily a theoretical concept, though it seems scientifically feasible according to known physics [00:07:07]. Direct research on this is not ongoing, but supporting technologies are being developed that could eventually enable accurate brain scanning [00:07:28]. The current limitations include a lack of technology to scan a living mind or reconstruct the dynamics of a dead, scanned brain [00:07:39]. This approach might be seen as “all-or-nothing” if the focus is only on full human brain emulation [00:08:52]. However, incremental advances in brain-like hardware/wetware and scanning could lead to valuable narrow AI applications (e.g., robot control, perception) and progress in understanding the human mind and diagnosing brain diseases [00:09:51]. It could also help in building animal-level AIs [00:10:21].
- Software Approaches: In contrast, creating AGI via software—whether loosely brain-inspired like deep neural networks or more mathematically and cognitively inspired like OpenCog—is the subject of concrete research projects today [00:08:08]. This path does not necessarily require a radical technological breakthrough beyond what currently exists, unlike brain emulation which needs significant advancements in imaging or dynamics extrapolation [00:11:11]. Software approaches allow for incremental benefits, which are already being observed [00:09:03].
The brain itself is a “proof of principle” that general intelligence can be built from molecules, much as birds proved that flying machines were possible; however, the best proof of principle is not always the best way to build something in practice [00:11:28]. Current hardware is very different from the brain, and software approaches can leverage existing hardware and knowledge (e.g., theorem proving, arithmetic, database lookup) where computers already outperform human brains [00:13:14].
OpenCog
OpenCog is an open-source software framework for AGI development, led by Ben Goertzel [00:00:43]. Its history dates back to the mid-1990s with the “Webmind” project, which envisioned a distributed network of autonomous agents cooperating to achieve emergent intelligence [00:15:39].
OpenCog aims for a more controlled and structured approach compared to Webmind [00:28:28]:
- Knowledge Graph (AtomSpace): It uses a weighted labeled hypergraph called the AtomSpace, with specific types of nodes and links, and values like truth and attention attached to them (see the sketch after this list) [00:16:31].
- Multiple AI Algorithms: Various AI algorithms operate on the AtomSpace, dynamically rewriting it and assisting each other [00:16:47]. These include:
- Probabilistic Logic Networks (PLN): A probabilistic logic engine [00:17:00].
- MOSES: A probabilistic evolutionary program learning algorithm that learns AtomSpace subnetworks [00:17:07].
- Economic Attention Networks (ECAN): Propagates attention values through the distributed network [00:17:19].
- Deep Neural Networks (DNNs): Used to recognize perceptual patterns and create nodes in the knowledge graph representing their layers [00:17:26].
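To make the data structure concrete, here is a minimal, self-contained Python sketch of a weighted labeled hypergraph with truth and attention values. It is illustrative only; the real OpenCog AtomSpace API differs, and the names here (`Atom`, `TruthValue`, `sti`) are simplified stand-ins.

```python
from dataclasses import dataclass, field

@dataclass
class TruthValue:
    strength: float = 1.0     # probability-like estimate
    confidence: float = 0.0   # weight of evidence behind the estimate

@dataclass
class Atom:
    atom_type: str                      # e.g. "ConceptNode", "InheritanceLink"
    name: str | None = None             # nodes have names; links usually do not
    outgoing: tuple["Atom", ...] = ()   # links point at other atoms (hypergraph)
    tv: TruthValue = field(default_factory=TruthValue)
    sti: float = 0.0                    # short-term importance (attention value)

class AtomSpace:
    """A toy atom container; the real AtomSpace also indexes and deduplicates."""
    def __init__(self):
        self.atoms: list[Atom] = []

    def add(self, atom: Atom) -> Atom:
        self.atoms.append(atom)
        return atom

# Build a tiny knowledge fragment: "cat inherits from animal".
space = AtomSpace()
cat = space.add(Atom("ConceptNode", "cat"))
animal = space.add(Atom("ConceptNode", "animal"))
space.add(Atom("InheritanceLink", outgoing=(cat, animal),
               tv=TruthValue(strength=0.95, confidence=0.8)))
```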
Cognitive Synergy
The concept of cognitive synergy in OpenCog refers to the process where different AI algorithms cooperate [00:17:45]. If one algorithm gets stuck or makes slow progress, others can understand its intermediate state and goals, then intervene to help it progress [00:17:51]. For example:
- Reasoning stuck on logical inference might be unblocked by evolutionary learning introducing new creative ideas or perception providing sensory metaphors [00:18:12].
- A deep neural network struggling with video recognition could refer to reasoning for analogy inference or evolutionary learning for brainstorming creative ideas [00:18:26].
This synergy is designed to be bidirectional and concurrent, with algorithms acting on the same dynamic knowledge graph in real time, exchanging probabilities and probability distributions [00:18:48]. This contrasts with modular systems that use clean API interfaces between modules [00:18:52]. It resembles a 1980s-style “blackboard system,” but with a dynamic, in-RAM, weighted, labeled hypergraph and uncertainty-savvy AI algorithms [00:19:19].
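A toy sketch of this dynamic, under the simplifying assumption that being “stuck” can be detected as a boolean: two illustrative agents (a PLN-like reasoner and a MOSES-like learner) act on one shared store, and the learner’s additions unblock the reasoner. None of this is OpenCog’s actual scheduler; all names are invented for illustration.

```python
import random

class SharedGraph:
    """Stand-in for the shared knowledge graph all algorithms act on."""
    def __init__(self):
        self.facts = set()

class Algorithm:
    name = "base"
    def step(self, graph: SharedGraph) -> bool:
        """Try to make progress; return False if stuck."""
        raise NotImplementedError

class Reasoner(Algorithm):
    name = "PLN-like reasoner"
    def step(self, graph):
        # Pretend inference succeeds only if enough raw material exists.
        if len(graph.facts) >= 3:
            graph.facts.add(f"derived-{len(graph.facts)}")
            return True
        return False  # stuck: too few premises to infer from

class EvolutionaryLearner(Algorithm):
    name = "MOSES-like learner"
    def step(self, graph):
        # Inject a new candidate pattern, giving the reasoner material.
        graph.facts.add(f"candidate-{random.randrange(10_000)}")
        return True

graph = SharedGraph()
agents = [Reasoner(), EvolutionaryLearner()]
for _ in range(6):
    for agent in agents:
        if not agent.step(graph):
            print(f"{agent.name} stuck; others keep working on the shared graph")
```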
High-level clues can flow back from higher cognitive levels to perceptual systems, assisting in disambiguation and interpretation, a feature not commonly seen in current deep learning projects but central to OpenCog’s structure [00:20:06]. This “neural-symbolic” approach, combining deep neural nets with symbolic cognitive engines and logic systems, is expected to become significant as current deep neural nets may run out of steam for problems requiring more abstraction [00:23:35].
Language Understanding
Language understanding is considered critical to the AGI project, alongside meta-reasoning (reasoning about reasoning) [00:27:04]. OpenCog is working on hybridizing symbolic pattern recognition with deep neural nets for syntax parsing, aiming to automatically learn grammar from large text corpora [00:27:32].
The process of language understanding involves the following stages (a toy sketch follows this list):
- Syntax Parsing: Producing grammatical parses of sentences, which can then be mapped into semantic representations [00:27:49].
- Semantic Interpretation: Learning mappings from sentence syntax to logical expressions (a core aspect of semantics), which can be ornamented with other data like images, episodic memories, or sounds [00:28:48].
- Pragmatics: Mapping semantics into broader contexts, treated as an association learning and reasoning problem [00:29:13].
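The following toy Python sketch walks the three stages on a single hand-written example; the parse format and the `to_logic` mapping are invented for illustration and are not OpenCog’s representations.

```python
# 1. Syntax: a dependency parse of "cats chase mice" as (head, relation, dependent).
parse = [("chase", "subj", "cats"), ("chase", "obj", "mice")]

# 2. Semantics: map the parse onto a predicate-argument logical expression.
def to_logic(deps):
    verb = deps[0][0]
    args = {rel: dep for _, rel, dep in deps}
    return f"{verb}({args.get('subj', '?')}, {args.get('obj', '?')})"

semantics = to_logic(parse)  # -> "chase(cats, mice)"

# 3. Pragmatics: associate the expression with a broader context.
utterance = {"logic": semantics, "context": "generic statement about animals"}
print(utterance)
```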
OpenCog focuses on unsupervised language acquisition to learn dependency grammar, which is then used by tools like the Link Parser [00:29:49]. While not yet at the level of supervised learning-based grammar parsers, starting with a small amount of partial supervised data (e.g., mapping word collections into semantic relations) can significantly improve accuracy for unsupervised learning [00:30:28].
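One common signal used in unsupervised grammar induction (an assumption here, not necessarily OpenCog’s exact method) is pointwise mutual information between co-occurring words, which can seed candidate dependency links:

```python
import math
from collections import Counter

# Tiny invented corpus; real grammar induction uses very large text corpora.
corpus = ["the cat sat", "the dog sat", "a cat ran"]

word_counts, pair_counts, total = Counter(), Counter(), 0
for sentence in corpus:
    words = sentence.split()
    word_counts.update(words)
    total += len(words)
    pair_counts.update(zip(words, words[1:]))  # adjacent word pairs

def pmi(w1, w2):
    """Pointwise mutual information of an adjacent word pair."""
    p_pair = pair_counts[(w1, w2)] / sum(pair_counts.values())
    p1, p2 = word_counts[w1] / total, word_counts[w2] / total
    return math.log(p_pair / (p1 * p2))

print(pmi("the", "cat"))  # high-PMI pairs suggest grammatical links
```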
One approach to learning semantics is using captioned images, correlating what’s in the images (recognized by neural nets connected to OpenCog) with the syntax parses of their captions [00:31:36]. This combines supervised data (from captioned images) with unsupervised data (from uncaptioned sentences) [00:32:14], mirroring how humans learn language through a mix of observed correlations and inference [00:33:04]. Ultimately, semantics comes from pattern mining across linguistic productions and non-linguistic sensory environments [00:36:22].
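A minimal sketch of that co-occurrence idea, with invented data: counting how often a visually recognized object appears alongside each caption word yields the kind of grounded association described above.

```python
from collections import Counter
from itertools import product

# Each example pairs objects a vision model reports with the image's caption.
examples = [
    ({"cat", "sofa"}, "a cat sleeps on the sofa"),
    ({"cat", "mouse"}, "the cat chases a mouse"),
    ({"dog", "ball"}, "a dog catches the ball"),
]

cooccur = Counter()
for objects, caption in examples:
    for obj, word in product(objects, caption.split()):
        cooccur[(obj, word)] += 1

# Words most associated with the visual object "cat":
print(cooccur[("cat", "cat")], cooccur[("cat", "chases")])
```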
Robotics and Embodiment
While robotics presents practical challenges (e.g., robots breaking, needing frequent recharges), it offers an “amazingly detailed simulation for free: the universe” [00:43:08]. The main issue is that current robots are limited in what they can do for AGI development [00:44:29]. The ideal AGI robot would behave like a toddler: moving freely in everyday human worlds, gathering multi-sensory input despite environmental challenges, and manipulating objects [00:44:36]. While all the individual robotic components exist (e.g., Boston Dynamics for movement, Hanson Robotics for expression, iCub for arm-hand coordination, SynTouch for fingertip sensitivity), integrating them into an “artificial toddler” has not yet been funded [00:45:20].
In principle, AGI does not require a robot; a superhuman mind could exist on the internet with sensors and actuators [00:47:02]. However, a human-like body for an AGI is valuable because many aspects of the human mind are attuned to such a body (e.g., eye-hand coordination, the narrative self, the illusion of free will, self-other relations) [00:47:13]. These experiences from interacting with the physical world help teach about object persistence and the self [00:47:50]. Embodiment is thus important for an AGI to understand human values, culture, and psychology [00:48:26].
SingularityNET
SingularityNET is a distributed network allowing anyone to create, share, and monetize AI services at scale [00:00:49]. It reflects early ideas of autonomous agents cooperating in a distributed network for emergent intelligence [00:49:29].
Key features and vision:
- Decentralized AI Economy: It enables a “society of minds” where AI agents can pay each other for services, or be paid by external agents (a toy sketch follows this list) [00:50:20]. This economic aspect aids in credit assignment and value assessment within the network [00:50:33].
- Blockchain Plumbing: Uses blockchain technology to facilitate interaction among AI agents without a central controller, promoting a heterogeneous and participatory control system [00:51:53].
- Marketplace for AI: It serves as a practical marketplace where anyone can contribute an AI agent, and it can be monetized [00:52:27].
- Counter-Trend to Centralization: SingularityNET aims to counteract the concentration of AI progress in the hands of a few large corporations [00:53:19]. A decentralized approach can enable AI to do more good in the world, especially if current narrow AIs evolve into AGI [00:53:40]. The current centralized AI ecosystem is largely focused on “selling, spying, killing, and gambling” (advertising, surveillance, weapons, finance) [00:54:07]. A decentralized network can foster AIs focused on education, disease research, science, elder care, and art [00:54:41].
- Network Effects: SingularityNET is a double-sided platform, like Uber or Airbnb, with a supply of AI developers and a demand from product developers and end-users [00:58:17]. This structure can leverage network effects for rapid growth once critical mass is achieved [00:58:56].
- Go-to-Market Strategy: The strategy focuses on building the demand side first, initially by internal AI developers [01:00:54]. This includes:
- Singularity Studio: A for-profit company building commercial products (e.g., in FinTech for risk management) that use the SingularityNET platform on the back end [01:01:40]. Licensing fees from these products drive demand for the AGI token within the network [01:02:33].
- SingularityNET X-Lab Accelerator: Recruits community projects to build software products using AI on SingularityNET, focusing on niche markets [01:03:07].
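As a toy illustration of the agent-to-agent payment idea (not the actual SingularityNET protocol or SDK), the sketch below keeps a simple token ledger in which one hypothetical agent pays another for a service call, so the network itself records which agents provide value:

```python
class Ledger:
    """Toy token ledger; real settlement happens on a blockchain."""
    def __init__(self):
        self.balances = {}

    def transfer(self, payer: str, payee: str, amount: float):
        if self.balances.get(payer, 0.0) < amount:
            raise ValueError(f"{payer} has insufficient tokens")
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0.0) + amount

ledger = Ledger()
ledger.balances = {"vision-agent": 10.0, "reasoning-agent": 10.0}

# A hypothetical reasoning agent pays a vision agent for an image-analysis call.
ledger.transfer("reasoning-agent", "vision-agent", 1.5)
print(ledger.balances)  # the vision agent accrues credit for the service
```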
AGI as Complex Self-Organizing Systems
Ben Goertzel’s approach emphasizes complex nonlinear dynamics and emergence, distinguishing it from mainstream AI, which often focuses on hierarchical neural nets and probabilistic pattern recognition [01:06:26]. He argues that two key forces underlie intelligent systems, akin to “being” and “becoming” in philosophy (a toy sketch follows the list):
- Evolution: Creates the new from the old (e.g., creativity in the mind) [01:07:49].
- Autopoiesis: Self-creation or self-building, a type of complex nonlinear dynamics seen in biology where a system continuously rebuilds and reconstructs itself (e.g., self and will in the mind, the immune system) [01:08:04].
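A crude toy sketch of these two forces interacting, with all parameters invented: a bit-string population evolves by mutation and selection (creating the new from the old), while each individual keeps rebuilding its own decaying components (a stand-in for autopoietic self-maintenance).

```python
import random

def fitness(genome):
    return sum(genome)

def rebuild(genome, repair_rate=0.3):
    # Autopoiesis-like step: components randomly decay; the system restores some.
    return [g if random.random() > 0.1 else (1 if random.random() < repair_rate else 0)
            for g in genome]

population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for generation in range(10):
    population = [rebuild(g) for g in population]        # self-maintenance
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                            # selection
    children = [[bit ^ (random.random() < 0.05) for bit in p]
                for p in parents]                        # mutation creates the new
    population = parents + children

print(max(fitness(g) for g in population))
```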
Leaving out these aspects (ecology/autopoiesis and evolution) and focusing only on hierarchical pattern recognition, as modern deep learning often does, misses much of what makes the human mind interesting, including creativity and the binding of conscious attention [01:10:01]. The current business models of large companies drive AI development towards easily measurable metrics (KPIs) and simple reward functions, which are less applicable to concepts like evolution or ecological self-reconstruction [01:10:40].
Goertzel prefers to think of AGI as “self-organizing complex adaptive dynamical systems” [01:11:57]. An AGI emerging from the internet or a conglomeration of narrow AI systems might be an “open-ended intelligence” that stretches our notion of intelligence, not primarily focused on maximizing simple reward functions [01:12:21].
Mind and Consciousness in AGI
When considering AGI, questions arise about whether it possesses a mind or consciousness. If every physical system has a “spark of proto-consciousness” (as per panpsychist views like Chalmers’), then human consciousness could be seen as emerging from these sparks when associated with specific information processing systems, such as those controlling a localized, embodied organism [01:13:32].
A global, distributed, complex self-organizing dynamical system like an internet-based AGI, without a central controller, might have a variety of consciousness that is much less unified than human-like consciousness [01:14:17]. The “unity” of human consciousness may stem from the unified nature of our body, which must control itself to survive, leading to unified goals [01:14:51]. An AGI that can replace its parts at will and whose components pursue overlapping goals dynamically might have a more diffuse, less unified conglomeration of “proto-conscious sparks” [01:15:07].
The concept of consciousness in AI is complex. Human consciousness is very specific, resulting from the organization of memories across various time frames and structures like a global workspace [01:16:25]. An analogy with digestion, a process carried out by multiple organs rather than an independent entity, suggests that consciousness is likewise a process emerging from specific architectural details [01:16:45]. Therefore, machine “consciousness” might be analogous to human consciousness but cannot be identical, owing to design differences [01:17:20]. The external functional description of a system (like the input/output behavior of a “Chinese room”) may not fully reveal its internal conscious state [01:21:57].