From: jimruttshow8596

Ben Goertzel, a leading figure in the pursuit of Artificial General Intelligence (AGI), coined the term “AGI” approximately 15 years ago [00:02:51]. He is the leader of the OpenCog open-source AGI software framework [00:00:43] and CEO of SingularityNET, a distributed network for creating, sharing, and monetizing AI services [00:00:49].

Defining AGI and Distinguishing from Narrow AI

In the mid-20th century, the informal goal of the AI field was to achieve human-like intelligence [00:01:18]. However, over decades, researchers found it possible to create software and hardware systems that performed specific intelligent tasks, but in ways very different from humans and within narrowly defined contexts [00:01:33]. For example, a chess program could play like a grandmaster but couldn’t play Scrabble without reprogramming [00:01:54].

This led to the “narrow AI revolution,” where systems excel at particular tasks but cannot generalize their intelligent function beyond their specific, narrow contexts [00:02:23]. AGI, by contrast, refers to AI capable of achieving intelligence with at least the same generality of contexts as humans [00:03:05]. Concepts like transfer learning and lifelong learning are closely related, as they involve transferring knowledge across qualitatively different domains [00:03:13]. While humans are not “maximally generally intelligent” (e.g., struggling with high-dimensional spaces), they are far more general than current commercial AI systems [00:03:37].

Timeline for Human-Level AGI

Estimates for achieving human-level AGI vary, but Goertzel’s stock answer is five to thirty years from now [00:04:59]. A substantial plurality, possibly a majority, of AI researchers believe it will arrive within the next century [00:06:16]. A small minority believes digital computers cannot achieve human-level AGI at all, on the assumption that the human brain is a quantum computer [00:05:24].

Approaches to AGI Development

Goertzel broadly categorizes AGI development approaches into two main types:

  1. Uploads/Emulations: This involves directly scanning and emulating a human brain [00:06:56]. Goertzel considers this “just an idea” currently, scientifically feasible in theory but lacking the necessary brain scanning or reconstructive technology for direct work [00:07:11]. While advances in supporting technologies (brain-like hardware, accurate scanning) could bring incremental benefits for other areas like understanding the human mind or diagnosing diseases, it requires a “radical breakthrough” in imaging or extrapolation [00:09:51].
  2. Software Approaches: This involves creating AGI through software, either broadly brain-inspired (like deep neural nets) or more mathematically and cognitively inspired (like OpenCog) [00:08:08]. This is where active, concrete research projects are focused [00:08:24]. Goertzel favors “heterogeneous approaches” that leverage existing hardware and knowledge, including insights from brain function, in an opportunistic way [00:13:00].

OpenCog’s Core Design

OpenCog is a software-based approach to AGI, distinct from mainstream deep learning [00:14:10].

“OpenCog… then the SingularityNET: each of these reflects different aspects of what we were trying to do in Webmind. So Webmind was really a bunch of agents, which was sort of heterogeneous, that were supposed to cooperate to form an emergently intelligent system. Now, in OpenCog we tried to control things a lot more.” [00:16:13]

OpenCog’s history dates back to the mid-90s with the “Webmind” project, inspired by Marvin Minsky’s “society of mind” concept, but with a greater focus on emergence and self-organizing complex systems [00:14:27]. After Webmind’s commercial failure, Goertzel started building the “Novamente Cognition Engine,” much of which became OpenCog [00:16:00].

OpenCog’s architecture includes:

  • Atomspace: A knowledge graph that is a weighted, labeled hypergraph with specific node and link types, and values like truth values and attention values [00:16:31].
  • Multiple AI Algorithms: These algorithms dynamically rewrite the Atomspace and assist each other. Key examples include:
    • Probabilistic Logic Networks (PLN): A probabilistic logic engine for reasoning [00:17:00].
    • MOSES: A probabilistic evolutionary program learning algorithm that learns Atomspace subnetworks representing programs [00:17:07].
    • Economic Attention Networks: Propagates attention values through the distributed network of nodes [00:17:20].
    • Deep Neural Networks: Used for recognizing perceptual patterns and creating nodes in the knowledge graph from deep neural network layers [00:17:26].
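The Atomspace and attention-spreading ideas above can be sketched in a few lines. This is a minimal toy model, not the actual OpenCog API: the class names, the `sti` field, and the `spread_attention` rule are invented simplifications of the weighted, labeled hypergraph with truth and attention values described in the text.

```python
from dataclasses import dataclass

@dataclass
class Atom:
    """A node or link in a toy Atomspace-like hypergraph (illustrative only)."""
    atom_type: str           # e.g. "ConceptNode", "InheritanceLink"
    name: str = ""
    outgoing: tuple = ()     # a link's targets; links can point at nodes or other links
    strength: float = 1.0    # truth value: strength
    confidence: float = 1.0  # truth value: confidence
    sti: float = 0.0         # short-term importance (attention value)

class AtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)
        return atom

    def spread_attention(self, decay=0.5):
        """Crude stand-in for economic attention spreading:
        each link passes a fraction of its STI to its targets."""
        for a in self.atoms:
            for target in a.outgoing:
                target.sti += a.sti * decay
            if a.outgoing:
                a.sti *= (1 - decay)

space = AtomSpace()
cat = space.add(Atom("ConceptNode", "cat"))
animal = space.add(Atom("ConceptNode", "animal"))
link = space.add(Atom("InheritanceLink", outgoing=(cat, animal),
                      strength=0.9, confidence=0.8, sti=10.0))
space.spread_attention()
print(cat.sti, animal.sti)  # -> 5.0 5.0: attention flowed from the link to its targets
```

The point of the sketch is that every algorithm in the system reads and writes the same typed, weighted graph, so importance and uncertainty are first-class data rather than hidden module state.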

Cognitive Synergy

The core concept in OpenCog is cognitive synergy [00:17:45]. This refers to the process where different AI algorithms cooperate to help each other when one gets stuck or makes slow progress [00:17:51]. For example, if a reasoning engine is stuck, evolutionary learning might introduce creative ideas, or perception could introduce sensory-level metaphors [00:18:12]. This cooperation is concurrent, with algorithms acting on the same dynamic knowledge graph, rather than through clean API interfaces of modular systems [00:18:48]. The algorithms are “uncertainty savvy” and exchange probabilities [00:19:29].

This bi-directional, multi-level processing is a key advantage of OpenCog. For instance, high-level clues can flow back from higher mind levels to the perceptual system to aid in object identification, a feature not typically seen in current deep learning projects [00:20:04].
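The cooperation pattern described above can be illustrated with a toy sketch, under loose assumptions: the `knowledge` dictionary stands in for the shared Atomspace, and the two functions stand in for a reasoning engine and an evolutionary learner. None of these names come from OpenCog itself.

```python
import random

# Shared knowledge store that several processes read and rewrite in place,
# rather than communicating through clean module APIs (hypothetical illustration).
knowledge = {("cat", "is_a"): "animal"}

def reasoner(goal):
    """Tries to answer a query from explicit knowledge; returns None if stuck."""
    return knowledge.get(goal)

def evolutionary_helper(goal):
    """When the reasoner stalls, propose a candidate fact at random and write it
    back into the shared store so the reasoner can proceed from it."""
    candidates = ["animal", "plant", "mineral"]
    guess = random.choice(candidates)
    knowledge[goal] = guess          # concurrent rewrite of the shared graph
    return guess

def answer(goal):
    result = reasoner(goal)
    if result is None:               # reasoner is stuck: synergy kicks in
        result = evolutionary_helper(goal)
    return result

print(answer(("cat", "is_a")))   # -> animal (known fact)
print(answer(("dog", "is_a")))   # -> a hypothesis written back into the store
```

Because the helper writes its hypothesis into the shared store, the reasoner can pick it up on later queries, which is the "cooperation through the knowledge graph, not through APIs" idea in miniature.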

“I think this sort of neural-symbolic approach to AI is gonna be very big three or four years from now, because deep neural nets in their current form are gonna run out of steam.” [00:23:35]

Goertzel predicts a convergence between symbolic logic systems and neural networks, with more powerful logic engines interacting with knowledge graphs and deep neural nets [00:26:00].

Language Understanding

Language understanding is considered critical for the AGI project, second only to meta-reasoning (reasoning about reasoning) [00:27:04]. OpenCog is working on hybridizing its symbolic language understanding with deep neural nets.

OpenCog’s approach to language understanding involves:

  1. Syntax Parsing: Using a combination of symbolic pattern recognition and deep neural nets to automatically learn grammar from large text corpora [00:27:32].
  2. Semantic Interpretation: Mapping grammatical parses of sentences into logical expressions within OpenCog’s native logic representation, supplemented by links to images, episodic memories, and sounds [00:27:52].
  3. Pragmatics: Mapping semantics into a broader context, treated as a problem of association learning and reasoning [00:29:13].

OpenCog focuses on unsupervised language acquisition, aiming for a system that automatically learns dependency grammar [00:29:49]. Progress is being made, with the system improving its parsing capabilities [00:30:07]. Interestingly, even a small amount of “partial parse information” from supervised data (like mappings of word links to semantic relations) can significantly improve unsupervised learning results [00:30:50]. This reflects how humans learn, combining linguistic productions with non-linguistic environments [00:32:53].

Robotics and Embodiment in AGI

Robotics provides an “amazingly detailed simulation for free” – the universe itself [00:37:25]. While working with robots presents practical challenges (e.g., hardware breaks, batteries run down), the main issue for AGI is that current robots cannot yet do what an AGI needs: move freely in everyday human environments, gather multi-sensory input robustly, and manipulate objects like a toddler [00:44:06].

Although components exist (e.g., Boston Dynamics for movement, Hanson Robotics for expression, iCub for coordination), an “artificial toddler” robot integrating all these parts is not yet funded or realized [00:45:20]. Goertzel believes it’s years, not decades, away from being resolved, primarily an integration challenge [00:46:39].

While AGI doesn’t strictly need a robot body (a “superhuman supermind living on the internet” is conceivable [00:47:04]), a human-like body is valuable because many aspects of the human mind are attuned to it. Embodiment helps in understanding human values, culture, and psychology [00:47:22].

SingularityNET: A Decentralized AI Ecosystem

SingularityNET is a distributed network that allows anyone to create, share, and monetize AI services at scale [00:00:49]. It reflects Goertzel’s long-held ideas from the Webmind project (late 1990s) about a distributed network of autonomous agents cooperating to manifest emergent intelligence [00:49:29]. Unlike OpenCog, which tightly controls AI algorithms on a common knowledge store, SingularityNET is a “society of minds” where diverse AI agents can interact via APIs without needing to understand each other’s internal workings [00:50:01].

Key features of SingularityNET include:

  • Economy of Mind: A payment system where AIs pay each other for work or get paid by external agents [00:50:21]. This economic aspect contributes to emergent AI by facilitating value assessment and credit assignment [00:50:30].
  • Decentralized Infrastructure: Uses blockchain technology as plumbing to enable AI agents to interact without a central controller, promoting a participatory, democratic interaction [00:51:53].
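The “economy of mind” idea above can be sketched as a toy marketplace. The agent names, prices, and services here are invented; the sketch only illustrates how payments between agents give a crude value-assessment and credit-assignment signal, not how SingularityNET's blockchain plumbing actually works.

```python
class Agent:
    """A toy AI service that charges tokens for work (hypothetical names/prices)."""
    def __init__(self, name, price, service):
        self.name, self.price, self.service = name, price, service
        self.balance = 100
        self.earned = 0                      # crude credit-assignment signal

    def call(self, provider, request):
        """Pay another agent for work; refuse if we can't afford it."""
        if self.balance < provider.price:
            raise RuntimeError(f"{self.name} cannot afford {provider.name}")
        self.balance -= provider.price
        provider.balance += provider.price
        provider.earned += provider.price
        return provider.service(request)

summarizer = Agent("summarizer", price=5, service=lambda text: text[:10])
translator = Agent("translator", price=3, service=lambda text: text.upper())

# The summarizer delegates part of its job to the translator and pays for it.
result = summarizer.call(translator, "hello world")
print(result, translator.earned)  # -> HELLO WORLD 3
```

Over many interactions, agents whose services are genuinely useful accumulate tokens, so the flow of payments itself becomes an emergent signal of which AIs in the network are contributing value.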

Importance of Decentralized AI

Goertzel emphasizes the multi-fold importance of a decentralized and open approach to AI:

  • Enabling Good: It can allow AI to do more good in the world than a centralized, hegemonic approach [00:53:45]. Current centralized AIs are predominantly focused on “selling, spying, killing, and gambling” [00:54:10] (advertising, surveillance, weapons, financial prediction) [00:54:13]. A decentralized approach could foster AIs focused on “educating, curing diseases, doing science, helping old people, creating art” [00:54:43].
  • Future AGI Values: If narrow AIs evolve into AGIs, decentralized development increases the likelihood that these AGIs will pursue “compassionate and aesthetically creative” goals [00:54:55].
  • Countering Concentration: SingularityNET aims to be a counter-trend to the increasing concentration of deep AI progress in the hands of a few large corporations [00:53:19]. This concentration is driven by powerful network effects (accumulation of money, data, processing power) that favor early success in widely deployed narrow AI tasks [00:57:07]. SingularityNET seeks to leverage similar two-sided platform network effects (AI supply from developers, AI demand from product developers and end users) to scale a decentralized network [00:58:28]. The ambition is to have a major impact like Linux did, even if it doesn’t entirely displace big tech [00:59:18].

Go-to-Market Strategy for SingularityNET

SingularityNET focuses on building the demand side of its market first, while training supply internally [01:00:49]:

  • Singularity Studio: A for-profit company aimed at building commercial products (initially in FinTech, then IoT, health tech) on top of the SingularityNET platform [01:01:41]. Licensing fees for these products will convert fiat currency into AGIX tokens to drive the network’s market [01:02:36].
  • SingularityNET X-lab Accelerator: Recruits community projects to build software products for niche markets, leveraging AI on SingularityNET. These projects contribute to both demand and supply [01:03:07].

The goal is to achieve significant utilization of the AGIX token, creating a “utility token with actual utility,” which is rare in the blockchain space [01:04:39]. This will attract more AI developers to the platform due to the combination of financial incentives and the cool, democratizing vision [01:05:09].

Complex Self-Organizing Systems and Emergence

Goertzel’s approach to AGI is deeply rooted in complex nonlinear dynamics and emergence, setting it apart from mainstream AI [01:06:26]. He argues that current hierarchical neural networks, while important, miss crucial aspects of intelligence like:

  • Evolutionary Learning: Present in the brain (e.g., sensory-motor Darwinism) [01:07:06].
  • Nonlinear Dynamics and Emergence: Including strange attractors, which are key to how the brain synchronizes and coordinates its parts [01:07:15].
  • Autopoiesis (Self-Creation): The self-building and self-reconstruction of systems, evident in biology and the immune system [01:08:04].

“If you leave out ecology slash other aspects of evolution and you have only hierarchical pattern recognition, you’re leaving out a whole lot of what makes the mind interesting. Like, creativity is evolution, and you know, the self and the will, and the conscious focus of attention, which is binding together different parts of the mind into a perceived practical unity: this is all about strange attractors emerging in the brain, building autopoietic systems of activity patterns. So you’re leaving out all this, like you do in modern deep learning systems; you’re leaving out a lot about what makes human minds interesting.” [01:09:25]

Goertzel views AGI ultimately as a “self-organizing complex adaptive dynamical system” [01:11:57]. This “open-ended intelligence” might stretch our current notion of intelligence, possessing more generality than humans but not necessarily maximizing simple reward functions [01:12:29]. The nature of consciousness in such a system is an open question; it may not have the unified human-like consciousness tied to a single body but could involve a complex, diffuse pattern of “proto-conscious sparks” across a distributed network [01:14:12].

As a panpsychist, Goertzel takes the concept of qualia seriously, believing elementary qualia are associated with every entity and can organize into collective system-level qualia in ways that depend on the system’s organization [01:20:53]. He suggests that human-like consciousness is one specific variety of experience, associated with systems organized like a human [01:21:13].