From: jimruttshow8596

Complex self-organizing systems and emergence are considered foundational for understanding and achieving Artificial General Intelligence (AGI) that parallels human cognition [01:06:05]. This perspective contrasts with mainstream AI approaches that often focus on hierarchical pattern recognition [01:06:40].

Core Concepts

Emergent Intelligence

The idea of emergent intelligence suggests that higher-level intelligent behaviors can arise from the interactions of simpler components within a system, rather than being explicitly programmed [01:12:21]. This concept was central to early AGI research, including the Webmind project [01:15:49].
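
As a minimal illustration of the general idea (my example, not one discussed in the episode), Conway’s Game of Life shows how a coherent higher-level pattern can arise from nothing but simple local rules:

```python
import numpy as np

def step(grid):
    """One Game of Life update: every cell follows the same simple local rule."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

grid = np.zeros((10, 10), dtype=int)
grid[1:4, 1:4] = [[0, 1, 0],   # a "glider": a coherent moving pattern that
                  [0, 0, 1],   # no individual cell rule mentions
                  [1, 1, 1]]

for _ in range(8):             # two glider periods
    grid = step(grid)
print(grid)                    # the glider has drifted diagonally across the grid
```

No rule refers to a glider, yet one forms and travels; the claim about minds is analogous, with cognitive-level structure arising from interactions among much simpler processes.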

Self-Organizing Systems

A self-organizing complex system is one where internal interactions lead to macroscopic order or structure without external guidance [01:06:05]. This is seen as a key aspect of how the mind works [01:15:03]. The human brain itself is described as a massively parallel system in which each part operates independently while contributing to the overall function [01:51:40].
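
A standard toy model of this kind of self-organization (again my illustration, not the episode’s) is the Kuramoto model: coupled oscillators fall into synchrony through purely internal interactions, with no external conductor.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt = 100, 2.0, 0.05          # oscillators, coupling strength, time step
omega = rng.normal(0.0, 0.5, N)    # each oscillator's natural frequency
theta = rng.uniform(0, 2 * np.pi, N)

def order_parameter(theta):
    """|mean of e^(i*theta)|: 0 means disorder, 1 means full synchrony."""
    return abs(np.exp(1j * theta).mean())

for _ in range(400):
    # Each oscillator adjusts only to the other oscillators' phases.
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += (omega + coupling) * dt

print(round(order_parameter(theta), 2))   # typically near 1: macroscopic order has emerged
```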

Nonlinear Dynamics and Strange Attractors

Nonlinear dynamics and strange attractors are crucial for understanding how the brain synchronizes and coordinates its many parts [01:07:15]. These concepts are linked to phenomena like creativity, the self, free will, and the conscious focus of attention, which involve the emergence of autopoietic systems of activity patterns [01:09:50].
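
The Lorenz system is the textbook strange attractor: trajectories stay on a bounded, intricately folded set yet never repeat, and nearby starting points diverge. The sketch below (an illustration only, not code from the projects discussed) integrates it with a simple Euler step:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])   # a microscopically different start

for _ in range(3000):
    a, b = lorenz_step(a), lorenz_step(b)

# Both trajectories remain bounded (the attractor), yet they have diverged
# from each other (sensitive dependence on initial conditions).
print(np.linalg.norm(a), np.linalg.norm(a - b))
```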

Two Fundamental Forces: Evolution and Autopoiesis

Intelligent systems are underpinned by two main forces, described as “being” and “becoming” in philosophical terms, or “evolution” and “autopoiesis” in dynamic terms [01:07:51].

Evolution

Evolution involves the creation of new things from old components [01:08:24]. In the context of the brain, this is seen in concepts like neural Darwinism [01:08:50].
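
A minimal sketch of this “new from old” dynamic is a mutation-plus-selection loop; the toy below is in that spirit, not the MOSES algorithm or any model of the brain:

```python
import random

TARGET = "emergence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    """Number of positions matching the target string."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Copy the parent with one randomly changed character: new from old."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(50)]

for generation in range(300):
    # Selection keeps the fitter half; variation refills with mutated copies.
    population.sort(key=fitness, reverse=True)
    population = population[:25] + [mutate(random.choice(population[:25]))
                                    for _ in range(25)]
    if fitness(population[0]) == len(TARGET):
        break

print(generation, population[0])
```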

Autopoiesis

Autopoiesis, a term coined by Maturana and Varela, refers to a system’s ability for “self-creation” or “self-building” [01:08:04]. This is a specific kind of complex nonlinear dynamic observed in biology, where a system continuously rebuilds and reconstructs itself to remain intact within a changing environment [01:08:14]. Examples include the immune system’s network theory and the brain’s cell assembly theory [01:09:01].
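
A very rough toy sketch of the idea (my own illustration, not Maturana and Varela’s formal account) is a system whose components keep wearing out while the surviving components rebuild them, so the organization persists even though its material parts continually change:

```python
import random

# Toy autopoiesis sketch (illustrative only): a fixed "organization" of roles,
# each filled by a perishable component that the rest of the system rebuilds.
ROLES = ["membrane", "metabolism", "repair", "sensing"]
components = {role: 0 for role in ROLES}   # role -> age of current component

random.seed(1)
rebuilds = 0
for t in range(100):
    # Wear and tear: every component ages, and some are destroyed outright.
    for role in ROLES:
        if components[role] is not None:
            components[role] += 1
            if random.random() < 0.1:
                components[role] = None

    # Self-production: the surviving components rebuild the missing ones.
    if any(c is not None for c in components.values()):
        for role in ROLES:
            if components[role] is None:
                components[role] = 0
                rebuilds += 1

print(f"rebuilt {rebuilds} components; organization intact: "
      f"{all(c is not None for c in components.values())}")
```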

Application in AGI Systems

OpenCog’s Cognitive Synergy

OpenCog’s architecture is designed to foster cognitive synergy, in which multiple AI algorithms cooperate on the same dynamic knowledge graph, the “AtomSpace” [01:17:40]. If one algorithm gets stuck or makes slow progress, the others can inspect its intermediate state and intervene to help it move forward [01:17:51]. This assistance is intended to be bi-directional and concurrent, with the algorithms helping one another in complex networks [02:09:50].

Examples of cognitive synergy within OpenCog:

  • A probabilistic logic engine (PLN) can be unstuck by an evolutionary program-learning algorithm (MOSES) introducing creative ideas [01:17:00], or by perception introducing sensory-level metaphors [01:18:21].
  • A deep neural network stuck on video recognition can call on logical reasoning to perform analogical inference, or on evolutionary learning to brainstorm creative ideas [01:22:50].

This approach emphasizes the interplay between different AI algorithms, unlike modular systems with clean API interfaces [01:18:52]. The focus on graphs and probabilities also distinguishes it from older “blackboard systems” [01:19:40].
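
The sketch below illustrates the synergy pattern in miniature, using invented names rather than the actual OpenCog API: two processes share one graph, so when one stalls the other can inspect its intermediate state and contribute a hint it can build on.

```python
from collections import defaultdict

class SharedGraph:
    """Minimal stand-in for an AtomSpace-like shared knowledge store."""
    def __init__(self):
        self.edges = defaultdict(set)          # node -> set of (node, source)

    def add_link(self, a, b, source):
        self.edges[a].add((b, source))
        self.edges[b].add((a, source))

def reasoner(graph, goal):
    """'Logic' process: succeeds only if some link already reaches the goal."""
    hits = list(graph.edges[goal])
    return hits or None                        # None means: stuck, needs help

def analogy_process(graph, goal):
    """'Perception/analogy' process: drops a sensory-level metaphor into the graph."""
    graph.add_link(goal, "metaphor:" + goal, source="analogy")

graph = SharedGraph()
goal = "cause_of_outbreak"

result = reasoner(graph, goal)
if result is None:
    # The reasoner is stuck; another process working on the same graph adds
    # an intermediate result the reasoner can then use.
    analogy_process(graph, goal)
    result = reasoner(graph, goal)

print(result)   # [('metaphor:cause_of_outbreak', 'analogy')]
```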

SingularityNET’s Distributed Network

SingularityNET extends the concept of self-organizing systems to a distributed network of autonomous AI agents [01:50:01]. These agents, each with its own AI methods, interact via APIs and can optionally share state [01:50:08]. This “society of minds” operates with a payment system in which AIs pay each other for work, or are paid by external agents [01:50:21]. The economics help with assigning credit and assessing value, contributing to emergent AI behavior in this more loosely coupled network [01:50:30].
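
The sketch below is a toy illustration of that economic pattern, using invented classes rather than SingularityNET’s actual protocol or SDK: agents expose priced services, call one another, and settle through a shared ledger, so the flow of tokens records who contributed to what.

```python
class Ledger:
    """Shared record of balances; a stand-in for on-chain token transfers."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def pay(self, payer, payee, amount):
        assert self.balances[payer] >= amount, "insufficient funds"
        self.balances[payer] -= amount
        self.balances[payee] += amount

class Agent:
    """An AI service that charges for work and can hire other agents."""
    def __init__(self, name, price, ledger, registry):
        self.name, self.price = name, price
        self.ledger, self.registry = ledger, registry
        registry[name] = self

    def call(self, provider_name, request):
        provider = self.registry[provider_name]
        self.ledger.pay(self.name, provider.name, provider.price)
        return provider.serve(request)

    def serve(self, request):
        return f"{self.name} handled {request!r}"

registry = {}
ledger = Ledger({"summarizer": 10, "translator": 10, "external_user": 50})
summarizer = Agent("summarizer", price=3, ledger=ledger, registry=registry)
translator = Agent("translator", price=2, ledger=ledger, registry=registry)

# An external user pays the summarizer, which then pays the translator for a
# sub-task: value flows through the network and records each agent's contribution.
ledger.pay("external_user", "summarizer", summarizer.price)
summarizer.call("translator", "translate source text")
print(ledger.balances)   # {'summarizer': 11, 'translator': 12, 'external_user': 47}
```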

The blockchain infrastructure allows these AI agents to interact without a central controller, instead relying on heterogeneous, participatory control [01:51:53]. This decentralized, open approach is deemed crucial for allowing AI to contribute positively to the world, beyond the “selling, spying, killing, and gambling” applications often driven by centralized corporations [01:54:00].

Broader Implications

Ignoring emergence and nonlinear dynamics in AI development, as current deep learning systems do, means missing crucial aspects of what makes the human mind interesting, such as creativity and the self [01:10:01]. These systems instead focus on maximizing simply formulated reward functions, an emphasis often tied to business models and key performance indicators (KPIs) [01:10:40].

Ultimately, the goal is to develop “open-ended intelligence” [01:12:11]: self-organizing complex adaptive dynamical systems that may stretch the current understanding of intelligence [01:12:34]. Such systems might not have human-like consciousness or maximize simple reward functions, but would be incredibly complex and adaptive [01:12:40].