From: jimruttshow8596
The field of artificial intelligence (AI) broadly encompasses two main categories: narrow AI and artificial general intelligence (AGI) [01:09:00]. While the initial informal goal of AI research in the mid-20th century was to create intelligence comparable to human intelligence [01:25:00], subsequent decades saw the rise of narrow AI systems.
Narrow AI
Narrow AI refers to software and hardware systems designed to perform particular tasks that would appear intelligent if performed by humans, but which these systems accomplish in a very different manner [01:33:00]. A key characteristic of narrow AI is its limited scope: these systems are developed for specific, narrowly defined problems and cannot generalize their intelligent behavior beyond their programmed or trained contexts [02:07:07].
A classic example of narrow AI is a program that can play chess at a grandmaster level but cannot play Scrabble or checkers without significant reprogramming [01:52:00]. Even so, the “narrow AI revolution” has produced a wide variety of systems performing highly intelligent-seeming tasks within specific domains [02:20:00].
Current deep neural networks, while powerful for tasks like perceptual pattern recognition in vision or audition, are largely considered forms of narrow AI. They excel at recognizing complex statistical patterns in data but do not inherently grasp overall meaning or deeper semantics, a limitation that is especially visible in natural language processing [22:51:00]. These systems tend to run out of “steam” when problems require more abstraction, which current deep neural networks are not designed to provide [24:10:00].
Artificial General Intelligence (AGI)
The term “AGI” was introduced by Ben Goertzel approximately 15 years ago to distinguish AI capable of achieving intelligence across the same generality of contexts that people can handle [02:49:00]. AGI aims for intelligence at a fully human level and beyond [00:35:00]. Concepts like “transfer learning” and “lifelong learning” are closely related to AGI, as they involve transferring knowledge from one domain to qualitatively different domains [03:13:00].
While humans are very general compared to existing narrow AI systems in commercial use [04:14:00], they are not “maximally generally intelligent.” For example, humans struggle with tasks in 275 dimensions, demonstrating a limitation in generalizing beyond the dimensionality of the physical universe they inhabit [03:54:00]. Therefore, a research goal for AGI is to create AI that is at least as generally intelligent as humans, and ultimately, more generally intelligent [04:19:00].
Significance and Outlook
The emergence of AGI is considered highly significant [01:14:00]. Estimates for achieving human-level AGI typically range from 5 to 30 years from now, with a substantial plurality or majority of AI researchers believing it will arrive within the next century [04:59:00]. A small minority of researchers believe digital computers can never achieve human-level general intelligence, positing that the human brain relies on non-Turing computing (e.g., quantum computing) [05:24:00].
There are two broad approaches to achieving AGI:
- Uploads/Emulations: Directly scanning and representing a human brain’s neural system (connectome) in a computer [06:37:00]. Currently, this is more of a theoretical idea than a practical research direction, lacking the necessary brain scanning and reconstructive technology [07:11:00]. Incremental progress in brain-like hardware and scanning could, however, lead to valuable advancements in other areas like understanding the human mind or diagnosing diseases [09:51:00]. While theoretically feasible, this approach might not be the most efficient or fastest way to build intelligent systems [11:40:00].
- Software Approaches: Developing AI through software, which can be either broadly brain-inspired (like current deep neural networks) or more math and cognitive science-inspired (like OpenCog) [08:08:00]. This approach is the subject of concrete research projects and offers incremental benefits [08:26:00].
Challenges and Approaches in AGI Development
One key challenge in AGI development is achieving real language understanding [30:02:00]. OpenCog, a project led by Ben Goertzel, pursues an approach to AGI that differs from mainstream methods, combining symbolic AI with deep learning. OpenCog is built around a knowledge graph called the AtomSpace, on which multiple AI algorithms (such as probabilistic logic networks, evolutionary program learning, and economic attention networks) cooperate dynamically [16:28:00]. This design emphasizes “cognitive synergy”: when one algorithm gets stuck, another can help out, for instance a reasoning engine drawing on evolutionary learning for new ideas or on perception for sensory metaphors [17:45:00].
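As a rough sketch of what “multiple algorithms cooperating on one knowledge graph” could look like, the Python toy below (purely illustrative; it is not the actual OpenCog/AtomSpace API, and every name in it is hypothetical) has a deduction-style reasoner and a random “evolutionary” proposer sharing a single graph and writing their results back into it:

```python
# Purely illustrative toy, not the real OpenCog API: several processes share
# one knowledge graph ("AtomSpace") and feed their results back into it.
import random

class AtomSpace:
    """A shared store of weighted links between concept nodes."""
    def __init__(self):
        self.links = {}  # (source, target) -> strength in [0, 1]

    def add_link(self, a, b, strength):
        self.links[(a, b)] = strength

    def strength(self, a, b):
        return self.links.get((a, b), 0.0)

def reason(space, a, c):
    """Crude deduction: estimate a -> c by chaining through any intermediate b."""
    best = 0.0
    for (x, b) in list(space.links):
        if x == a:
            best = max(best, space.strength(a, b) * space.strength(b, c))
    return best

def evolve_candidates(space, concepts, trials=50):
    """Crude 'evolutionary' proposer: guess new links at random and keep the
    ones the reasoner can already partially support."""
    for _ in range(trials):
        a, b = random.sample(concepts, 2)
        support = reason(space, a, b)
        if support > 0.2 and (a, b) not in space.links:
            space.add_link(a, b, support)  # write the new link back into the graph

space = AtomSpace()
concepts = ["cat", "mammal", "animal", "pet"]
space.add_link("cat", "mammal", 0.9)
space.add_link("mammal", "animal", 0.95)
space.add_link("cat", "pet", 0.8)

# The proposer adds plausible links that the reasoner, or any other process
# using the same AtomSpace, can then build on.
evolve_candidates(space, concepts)
print("cat -> animal:", reason(space, "cat", "animal"))
print("links in the AtomSpace:", space.links)
```

The point of the sketch is the shared store: each process reads and writes the same graph, so improvements made by one become immediately available to the others.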
Criticism of deep neural networks as a route to AGI often centers on their inability to easily incorporate background knowledge or to solve problems bidirectionally, both of which are strengths of OpenCog’s design [20:31:00]. A “neural-symbolic approach,” combining deep neural networks for pattern recognition with symbolic AI for abstraction and reasoning, is anticipated to be a major trend in AI development [23:35:00].
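A minimal, hypothetical sketch of such a neural-symbolic pipeline (the recognizer is stubbed out; in practice it would be a trained deep network) might pass perceptual labels from the network into a small symbolic layer that forward-chains background knowledge:

```python
# Hypothetical sketch: a (stubbed) neural recognizer supplies perceptual
# labels, and a small symbolic layer applies background knowledge on top.

def neural_recognizer(image_path):
    """Stand-in for a trained deep network: returns (label, confidence) pairs."""
    return [("zebra", 0.92), ("horse", 0.31)]

# Background knowledge as simple if-then rules over symbols.
RULES = [
    ("zebra", "striped"),
    ("zebra", "herbivore"),
    ("herbivore", "eats_plants"),
]

def infer(facts):
    """Forward-chain the rules until no new symbolic facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in RULES:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Keep only confident perceptual labels, then reason symbolically over them.
perceived = {label for label, conf in neural_recognizer("photo.jpg") if conf > 0.5}
print(infer(perceived))  # e.g. {'zebra', 'striped', 'herbivore', 'eats_plants'}
```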
Robotics, while challenging due to hardware limitations, offers the real world as a “free” simulation environment for AGI [43:13:00]. Embodiment in a human-like body is considered valuable for an AGI to understand human values, culture, and psychology, even if not strictly necessary for intelligence itself [47:13:00].
SingularityNET and Decentralized AI
SingularityNET is a decentralized network that allows anyone to create, share, and monetize AI services at scale [00:49:00]. It reflects the idea of a “society of minds,” where diverse AI agents cooperate and interact, similar to a self-organizing system without a central controller [49:22:00]. This platform uses blockchain technology as plumbing to enable a distributed economy of AI agents, fostering a marketplace where AIs can charge each other and external agents for services [51:17:00].
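A toy model of that marketplace mechanism (purely illustrative; it is not the SingularityNET SDK, and the names below are hypothetical) might look like the following, with a simple token ledger standing in for the blockchain plumbing and agents paying one another per service call:

```python
# Purely illustrative toy, not the SingularityNET SDK: AI agents sell services
# to one another, with a simple token ledger standing in for the blockchain.

class Ledger:
    """Tracks token balances and transfers payment between agents."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, payer, payee, amount):
        if self.balances.get(payer, 0) < amount:
            raise ValueError("insufficient tokens")
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0) + amount

class Agent:
    """An AI service with a name, a per-call price, and a handler function."""
    def __init__(self, name, price, handler):
        self.name, self.price, self.handler = name, price, handler

class Marketplace:
    """A registry where any agent can list a service and call another's."""
    def __init__(self, ledger):
        self.ledger, self.agents = ledger, {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def call(self, caller, service, payload):
        provider = self.agents[service]
        self.ledger.transfer(caller, provider.name, provider.price)  # pay per call
        return provider.handler(payload)

ledger = Ledger({"summarizer": 10, "translator": 10})
market = Marketplace(ledger)
market.register(Agent("translator", 2, lambda text: text.upper()))  # stand-in "AI"
market.register(Agent("summarizer", 3, lambda text: text[:10]))

# One AI service composes another and pays it out of its own balance.
result = market.call("summarizer", "translator", "bonjour le monde")
print(result, ledger.balances)
```

In the real platform the registry and ledger are decentralized; the sketch only shows the economic loop in which one AI service composes and pays another.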
This decentralized approach to AI is important for several reasons:
- It allows AI to contribute to more beneficial goals in the world, beyond the current industry focus on advertising, surveillance, weapons systems, and financial prediction (“selling, spying, killing, and gambling”) [54:05:00].
- It counters the increasing concentration of AI progress into a few large corporations and governments, promoting a more democratic and open ecosystem [52:52:00].
- By fostering network effects for a two-sided market (AI developers as supply, product developers/end users as demand), SingularityNET aims to achieve critical mass and grow a broad, decentralized AI ecosystem [58:17:00].
AGI and Complex Systems
The development of AGI is also viewed through the lens of complex self-organizing systems, emergence, chaos, and strange attractors [01:06:11]. Mainstream AI models, while successful with hierarchical neural networks, often overlook crucial aspects like evolutionary learning, autopoiesis (self-creation and self-reconstruction), and non-linear dynamics, which are integral to how the brain synchronizes and coordinates its parts [01:06:59].
Creativity, the self, and the conscious focus of attention in the human mind are seen as emerging from strange attractors and autopoietic systems of activity patterns in the brain [01:09:50]. The drive for easily measurable metrics in corporate-driven AI development naturally favors algorithms focused on maximizing simple reward functions, often neglecting the more “fuzzy” concepts of evolution creating new things or an ecological system maintaining and growing itself [01:10:35].
Ultimately, an AGI emerging from the internet or a conglomeration of narrow AI systems may result in an “open-ended intelligence” that stretches our traditional notions of intelligence, potentially being more general than humans but not necessarily optimizing for simplistic reward functions [01:12:29]. This raises questions about whether such a system would be conscious in a human-like way, as human consciousness might be tied to the specific needs of controlling a localized, embodied organism [01:14:00].