From: lexfridman
Achieving artificial general intelligence (AGI) remains one of the most profound and challenging goals in computer science and cognitive psychology. The endeavor is not just about creating machines that can solve specific tasks, but about constructing systems that match or exceed the dynamic capabilities of human intelligence in general-purpose understanding and learning.
Historical Context and the Quest for Common Sense
Achieving AGI involves acquiring what is known as common sense knowledge — the vast body of understanding that people accumulate through everyday experience. Pioneers like Doug Lenat, with his project Cyc, have been pursuing this since 1984 [01:15:00]. The goal of Cyc is to assemble a comprehensive knowledge base spanning basic concepts about how the world works, which is critical if an AI is not to be “brittle” — vulnerable to errors because it lacks context or understanding [00:02:00].
The challenge is formidable because computers traditionally lack an inherent understanding of the world, akin to a dog fetching the newspaper without understanding what a newspaper signifies [00:02:35].
Knowledge Representation and Reasoning
The essence of achieving AGI lies in how knowledge is represented and how reasoning is implemented. In traditional AI, the problem has often been split into an epistemological part — what the system should know — and a heuristic part — how efficiently it can use that knowledge [02:00:46].
Projects like Cyc use a formal logical language to represent knowledge about the world, enabling mechanical inference procedures to mimic human-like reasoning [00:08:24]. Such systems must, however, reconcile the ability to “think slowly” and deliberately with the efficiency needed to handle real-time scenarios [02:01:52].
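To make that separation concrete, here is a minimal sketch in Python, assuming a toy set of invented facts and a single transitivity rule (this is not Cyc's CycL language or its inference engine): the facts and rule are the "what to know" part, while the forward-chaining loop is a deliberately naive "how to use it" part.

```python
# Toy knowledge base plus a naive inference loop.

facts = {("isa", "Fido", "Dog"), ("isa", "Dog", "Mammal")}

def transitivity_rule(kb):
    """isa(a, b) and isa(b, c) imply isa(a, c)."""
    derived = set()
    for (p1, a, b) in kb:
        for (p2, c, d) in kb:
            if p1 == "isa" and p2 == "isa" and b == c:
                derived.add(("isa", a, d))
    return derived

def forward_chain(kb, rules):
    """Apply every rule until no new facts appear (a fixed point)."""
    while True:
        new_facts = set()
        for rule in rules:
            new_facts |= rule(kb) - kb
        if not new_facts:
            return kb
        kb = kb | new_facts

kb = forward_chain(facts, [transitivity_rule])
print(("isa", "Fido", "Mammal") in kb)  # True: derived, not hand-entered
```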
Challenges in Achieving AGI
One critical insight is that capturing common sense understanding — a prerequisite for AGI — requires tens of millions of pieces of knowledge, encoded in a way that machines can process similarly to human reasoning [00:17:00]. Yet entering such vast quantities of knowledge by hand would, as Doug Lenat notes, take on the order of person-centuries, raising serious questions about the human effort and time the project demands [00:12:08].
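As a purely illustrative back-of-the-envelope check on that “person-centuries” scale (the assertion count and per-assertion time below are assumptions, not figures from the conversation):

```python
# Illustrative arithmetic only; the inputs are assumed, not quoted figures.
assertions = 10_000_000        # "tens of millions" of pieces of knowledge
hours_per_assertion = 0.5      # assume ~30 minutes to encode and review each one
working_hours_per_year = 2_000

person_years = assertions * hours_per_assertion / working_hours_per_year
print(person_years, person_years / 100)  # 2500.0 person-years, i.e. 25 person-centuries
```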
Currently, the approach emphasizes combining manual knowledge entry with automated processes such as natural language understanding and logical abduction — making educated guesses from incomplete information [00:52:03].
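Logical abduction can be sketched as running if-then rules in reverse: given an observation, propose causes that would explain it. The toy Python example below uses invented rules purely for illustration:

```python
# Toy abduction: walk backwards through cause -> effect rules to propose
# hypotheses that would explain an observed fact. The rule content is invented.

RULES = {
    "wet_grass": ["rained_last_night", "sprinkler_ran"],
    "rained_last_night": ["storm_front_passed"],
}

def explain(observation, rules, depth=2):
    """Collect candidate explanations, chaining back up to a small depth."""
    if depth == 0 or observation not in rules:
        return {observation}
    hypotheses = set()
    for cause in rules[observation]:
        hypotheses |= explain(cause, rules, depth - 1)
    return hypotheses

print(explain("wet_grass", RULES))
# {'sprinkler_ran', 'storm_front_passed'}: educated guesses, not certainties
```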
Bridging the Gap with Machine Learning
The relationship between deep learning and AGI is often viewed as complementary. While deep learning excels at pattern recognition — a task akin to the functions of the right hemisphere of the human brain — AGI also requires “left brain” reasoning capabilities [00:54:21]. Bringing the two together is a critical step for AI systems that aspire to general intelligence.
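One common way to picture this complementarity (an assumption of this sketch, not a description of any specific system discussed in the episode) is a pipeline in which a learned model produces symbols and a small rule base reasons over them:

```python
# Hypothetical neuro-symbolic pipeline. perceive() stands in for a trained
# neural network ("right brain"); TRAFFIC_RULES is a tiny symbolic layer
# ("left brain"). All names and rules here are illustrative.

def perceive(image):
    """Stub for a learned classifier; a real system would run a network here."""
    return "stop_sign"

TRAFFIC_RULES = {
    "stop_sign": "come to a complete stop",
    "green_light": "proceed if the intersection is clear",
}

def decide(image):
    symbol = perceive(image)                                      # pattern recognition
    action = TRAFFIC_RULES.get(symbol, "slow down and reassess")  # explicit reasoning
    return symbol, action

print(decide(image=None))  # ('stop_sign', 'come to a complete stop')
```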
Philosophical and Ethical Dimensions
The pursuit of AGI also raises profound philosophical questions about the nature of intelligence and consciousness: Do machines need to be conscious to be considered truly intelligent? How should ethical considerations be incorporated into AGI development? Understanding and moral reasoning become paramount when deploying AGI in real-world applications, such as autonomous vehicles [02:34:42].
Towards the Future of AGI
Doug Lenat’s work suggests that by thoroughly building foundational knowledge and inference capabilities, we can prime AGI for further self-improvement and learning, drawing parallels to human cognitive development [00:37:31]. While the path toward achieving AGI is intricate and fraught with technical and ethical challenges, it remains one of the most inspiring pursuits in the field of artificial intelligence.
The Growing Influence of AGI
The potential realization of AGI promises a future where human cognition is augmented, enabling novel solutions to global challenges and reshaping the landscape of human knowledge and intelligence.