From: jimruttshow8596

OpenCog is an open-source Artificial General Intelligence (AGI) software framework and a leading effort in the pursuit of AI at and beyond human levels. It is led by Ben Goertzel, a prominent figure in AGI research. AGI refers to AI capable of achieving intelligence across the same generality of contexts that people can.

Philosophy and Origins

OpenCog’s history predates the formal project name, tracing back to the mid-1990s when Ben Goertzel conceived of an AI as an agent system, a “society of mind” as described by Marvin Minsky, but with a stronger focus on emergence. While Minsky viewed the human mind as a collection of AI agents interacting like people in a society, Goertzel emphasized the emergent level of dynamics and the emergence of overall structures in the network of agents as equally important to individual agent intelligence.

Initial efforts in the late 1990s involved a system called WebMind, envisioned as heterogeneous agents distributed across the internet, coordinating to yield emergent intelligence. After WebMind’s commercial failure, the Novamente Cognition Engine was developed, much of which was later open-sourced into the OpenCog system.

OpenCog Architecture

OpenCog’s design focuses on controlled emergence within a carefully structured framework, unlike the more loosely coupled WebMind. Its core components include:

  • Knowledge Graph (AtomSpace): A weighted, labeled hypergraph that contains particular types of nodes and links, with values such as truth and attention attached.
  • Multiple AI Algorithms: These algorithms dynamically rewrite and interact with the AtomSpace, often monitoring and assisting each other. Examples include:
    • Probabilistic Logic Networks (PLN): A probabilistic logic engine described in a 2006 book.
    • MOSES: A probabilistic evolutionary program learning algorithm that learns AtomSpace sub-networks representing executable programs.
    • Economic Attention Networks (ECAN): Propagates attention values through the distributed network of nodes.
    • Deep Neural Networks: Used to recognize perceptual patterns or patterns in other data, creating nodes in the knowledge graph representing sub-networks or layers in the deep neural networks.
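The hypergraph structure described above can be sketched in a few lines. This is a minimal illustrative model, not OpenCog's actual AtomSpace API (the real system is a C++ engine with Python bindings and a rich atom-type hierarchy); the class and field names here are assumptions for the sake of the example.

```python
from dataclasses import dataclass

# Minimal sketch of an AtomSpace-like weighted, labeled hypergraph.
# Names and fields are illustrative, not OpenCog's real API.

@dataclass
class Atom:
    atom_type: str                 # e.g. "ConceptNode", "InheritanceLink"
    name: str = ""                 # node label (empty for links)
    outgoing: tuple = ()           # atoms a link points to (empty for nodes)
    truth: tuple = (1.0, 0.0)      # (strength, confidence) truth value
    attention: float = 0.0         # short-term importance, as in ECAN

class AtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)
        return atom

space = AtomSpace()
cat = space.add(Atom("ConceptNode", "cat"))
animal = space.add(Atom("ConceptNode", "animal"))
# A link is itself an atom whose outgoing set contains other atoms --
# this is what makes the structure a hypergraph rather than a plain graph,
# since links can also point to other links.
link = space.add(Atom("InheritanceLink", outgoing=(cat, animal),
                      truth=(0.95, 0.9)))
```

Because links are atoms, higher-order knowledge (rules about rules, attention allocated to inferences) lives in the same store that every algorithm reads and writes.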

Cognitive Synergy

A core concept in OpenCog is cognitive synergy, which describes how different AI algorithms cooperate. When one algorithm gets stuck or makes slow progress, other algorithms can understand its intermediate state and goals, then intervene to help make new progress. For example, if a reasoning engine is stuck, evolutionary learning might introduce creative ideas, or perception could introduce sensory-level metaphors. This bi-directional and concurrent cooperation on the same dynamic knowledge graph distinguishes it from modular systems with clean API interfaces.
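The mechanism can be caricatured as follows: rather than exchanging results through a narrow API, both processes read and write one shared state, so a second algorithm can see that the first has stalled and act on its partial work. Everything here (the state keys, the two step functions) is a hypothetical toy, not OpenCog code.

```python
# Toy illustration of cognitive synergy: two algorithms share one
# knowledge store; when one stalls, the other inspects its intermediate
# state and contributes. All names are hypothetical.

shared_state = {"goal": "explain(purr)", "partial": [], "stuck": False}

def reasoning_step(state):
    # A deductive step that hits an impasse (simulated by giving up).
    state["stuck"] = True

def evolutionary_step(state):
    # A second algorithm reads the same state -- including the fact that
    # the reasoner is stuck -- and injects a candidate to build on.
    if state["stuck"]:
        state["partial"].append("candidate: purr ~ contentment-signal")
        state["stuck"] = False

reasoning_step(shared_state)
evolutionary_step(shared_state)   # intervenes via the shared state
```

The point of the sketch is the coupling: the helper's precondition is the other algorithm's internal status, something a clean request/response interface would hide.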

Distinction from Narrow AI and Deep Learning

Ben Goertzel coined the term AGI about 15 years prior to the discussion to differentiate it from “narrow AI,” which excels at specific tasks in narrowly defined ways, often very differently from humans, and struggles to generalize beyond specific contexts. While narrow AI has led to a “narrow AI revolution” with astounding varieties of intelligent systems, OpenCog aims for intelligence with generality of contexts similar to humans. This goal necessitates capabilities like transfer learning and lifelong learning.

OpenCog employs a “neural symbolic approach,” combining deep neural networks for perceptual patterns with a symbolic cognitive engine (AtomSpace, PLN). This contrasts with many current deep learning projects, which typically lack bidirectional processing and don’t deeply integrate symbolic knowledge.

Key Research Areas within OpenCog

Cognitively Informed Perception

OpenCog aims for “cognitively informed perception,” where high-level clues from the mind flow back to the perceptual system to disambiguate or interpret stimuli using background knowledge. This is something current deep neural networks typically don’t attempt. Experiments are underway in OpenCog to enable symbolic cognitive engines to interact in real-time with deep neural networks for perception.

Language Understanding

OpenCog views language understanding as critical for AGI. Its approach involves:

  • Syntax Parsing: Combining symbolic pattern recognition and deep neural nets to automatically learn grammar from large text corpora.
  • Semantic Interpretation: Mapping grammatical parses of sentences into logical expressions within the AtomSpace, which can be linked to other modalities like images and episodic memories.
  • Pragmatics: Mapping semantics into broader contexts through association learning and reasoning.
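The syntax-to-semantics step above can be sketched as a rewrite from a grammatical parse into a logic-style expression of the kind that would be stored in the AtomSpace. The parse format and the `parse_to_logic` function are invented for illustration; only the EvaluationLink/PredicateNode naming loosely follows OpenCog's atom vocabulary.

```python
# Sketch of the semantic-interpretation step: a dependency-style parse
# of "cats chase mice" is rewritten into a nested logical expression.
# Purely illustrative; not OpenCog's actual parser output or API.

parse = {"verb": "chase", "subject": "cats", "object": "mice"}

def parse_to_logic(p):
    # Map a grammatical parse onto a predicate applied to its arguments --
    # the shape of expression that would live in the AtomSpace, where it
    # can then be linked to images, episodic memories, and inference rules.
    return ("EvaluationLink",
            ("PredicateNode", p["verb"]),
            ("ListLink",
             ("ConceptNode", p["subject"]),
             ("ConceptNode", p["object"])))

expr = parse_to_logic(parse)
```

Once in this form, the pragmatics stage can treat the expression like any other knowledge: associating it with contexts and reasoning over it rather than over raw text.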

Research focuses on unsupervised language acquisition, which aims to learn dependency grammar automatically. There is also exploration into mixed approaches, such as seeding unsupervised learning with small amounts of supervised data, like partial parse information or captioned images, mimicking how humans learn language through a mixture of supervised and unsupervised cross-modal learning. The “AGI preschool” idea suggests systems learning linguistic and non-linguistic patterns in the context of practical goal achievement in multimodal environments.

Meta-Reasoning

Goertzel identifies meta-reasoning (reasoning about reasoning) as the most critical area currently being worked on towards AGI.

Relation to Robotics and Embodiment

While AGI in principle doesn’t require a robot, an OpenCog system with a human-like body would be valuable. Many aspects of the human mind, from hand-eye coordination to concepts of self and free will, are attuned to having a body that interacts with the physical world and experiences pain and perception. Embodiment is seen as important for an AGI to understand human values, culture, and psychology and to empathize with humans. Current robotics faces challenges in providing truly versatile “toddler robots” that can freely move and gather multi-sensory input in everyday human worlds.

SingularityNet

OpenCog is closely related to SingularityNet, a distributed network that allows anyone to create, share, and monetize AI services at scale. SingularityNet embodies the idea of a “society of minds” where different AI agents, each doing AI in their own way, interact via APIs. This network features a payment system where AIs can charge each other or external agents for services, creating an “economy of mind” that enables emergent AI and a viable commercial ecosystem. The infrastructure uses blockchain to enable a self-organizing agent system without a central controller.
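The "economy of mind" mechanism reduces to a simple pattern: agents invoke each other's services and pay a fee per call out of token balances. The sketch below shows only that pattern; real SingularityNet settles payments with cryptographic tokens on a blockchain, and the agent names, fees, and `call_service` helper here are hypothetical.

```python
# Toy "economy of mind": agents call each other's services and pay per
# call from token balances. Illustrates the mechanism only -- real
# SingularityNet uses blockchain token payments, not a Python dict.

balances = {"parser": 100, "summarizer": 100}

def call_service(caller, provider, fee, service, arg):
    # Transfer the fee from caller to provider, then run the service.
    balances[caller] -= fee
    balances[provider] += fee
    return service(arg)

# The summarizer agent pays the parser agent 5 tokens for a tokenization.
result = call_service("summarizer", "parser", fee=5,
                      service=lambda text: text.split(),
                      arg="cats chase mice")
```

With prices attached to every call, which services thrive is decided by demand across the network rather than by a central controller, which is the self-organizing property the passage describes.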

SingularityNet is positioned as a counter-trend to the increasing concentration of AI development in a few large corporations. It aims to enable AI to do more good in the world and, if current narrow AIs evolve into AGIs, to ensure they develop compassionate and aesthetically creative goals. Like Linux in the operating system world, SingularityNet aims to be a major force in decentralized AI, providing an open, democratically governed platform with a genuine market for AI services.

Broader Philosophical Context

Ben Goertzel’s approach to AI is rooted in complex self-organizing systems, emergence, chaos, and strange attractors. He argues that mainstream AI, particularly deep learning, misses key aspects of how the brain works by focusing primarily on hierarchical pattern recognition and maximizing simply formulated reward functions.

He highlights two fundamental forces underlying intelligent systems:

  • Evolution: Creation of the new from the old, driving creativity.
  • Autopoiesis (Self-Creation/Self-Building): A form of complex nonlinear dynamics seen in biology where a system continually rebuilds and reconstructs itself, maintaining integrity in a changing environment. This relates to ecology and to concepts like neural Darwinism and cell assembly theory in the brain.

These nonlinear dynamics are crucial for aspects of mind like creativity, self, will, and conscious attention, which he believes are largely overlooked in modern deep learning.

Consciousness and Mind

Goertzel distinguishes between human-like consciousness and broader forms of awareness or experience. He aligns with panpsychism, suggesting that “elementary qualia” (the subjective qualities of experience) are associated with every entity and can organize into collective system-level qualia depending on the system’s organization. Human-like consciousness, with its unity, is seen as driven by the need to control a unified, mobile body to achieve goals like survival.

He acknowledges that a distributed, self-organizing dynamical system across the internet might have a “variety of consciousness” or a “conglomeration of proto-conscious sparks” that is far less unified than human consciousness. This leads to the concept of “open-ended intelligence,” which may stretch traditional notions of intelligence and not be primarily about maximizing reward functions. While the host and Goertzel hold some differing views on the precise definition of consciousness (e.g., John Searle’s “Chinese Room” argument and the analogy to digestion), they agree that understanding human-like consciousness is a specific problem distinct from the broader nature of mind and awareness in complex systems.

Prospects for AGI

Ben Goertzel’s stock answer for when human-level AGI might appear is five to thirty years from the time of the discussion. He notes that the mean and variance of such estimates within the AI community have significantly decreased over the past decade, with a substantial plurality now believing it will arrive within the next century. He emphasizes that achieving human-level AGI might require substantially new approaches beyond incrementally improving current narrow AI systems.

He identifies two broad paths to AGI:

  1. Software Approaches: Loosely brain-inspired software (like deep neural nets) or more math and cognitive science-inspired software (like OpenCog). This path is currently the subject of concrete research projects.
  2. Brain Uploads/Emulations: Directly scanning and representing a human brain’s neural system (connectome) in a computer. While scientifically feasible according to known physics, this is currently “just an idea” due to the lack of necessary brain scanning and reconstructive technology. Incremental progress in brain-like hardware and accurate brain scanning would still yield valuable advancements for understanding the human mind and diagnosing diseases. However, Goertzel believes that OpenCog’s heterogeneous and opportunistic approach to AGI, leveraging existing hardware and knowledge while drawing from brain science, is more practical. He anticipates a convergence of symbolic (like OpenCog) and neural (like deep learning) approaches in the coming years.