From: lexfridman

The concept of general intelligence, particularly in the realm of artificial intelligence (AI), presents complex challenges in both definition and measurement. This article delves into these challenges, emphasizing the distinction between measuring intelligence as a collection of specific skills and intelligence as a general learning ability.

What is General Intelligence?

According to François Chollet, an AI researcher, general intelligence should not merely be seen as a collection of task-specific skills. Instead, it is more accurately described as the ability to efficiently acquire new skills in environments or situations that the agent has not previously encountered or prepared for [00:23:24].

Quote

“Intelligence is the efficiency with which you acquire new skills at tasks that you did not previously know about, that you did not prepare for” [00:26:31].
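
In Chollet’s paper “On the Measure of Intelligence”, this idea is made quantitative: intelligence is framed as skill-acquisition efficiency, controlling for the prior knowledge a system starts with, the experience it consumes, and the difficulty of generalization. The paper’s exact formulation is stated in algorithmic-information-theoretic terms; the following is only a loose paraphrase of the relationship, not the formula itself:

```latex
% Loose paraphrase of Chollet's measure -- not the paper's exact
% algorithmic-information-theoretic definition.
\[
  \text{Intelligence} \;\propto\;
  \frac{\text{skill attained across a scope of unfamiliar tasks}}
       {\text{priors} + \text{experience}}
\]
```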

Intelligence vs Skill

The distinction between intelligence and skill is crucial. As Chollet notes, intelligence is not about the skill itself but about the process of learning and adapting to new challenges. A system that is able to function in a new environment, adapt, and generalize demonstrates intelligence, whereas a system that cannot deviate from its predefined programming does not [00:27:19].

Process vs Output

In AI, distinguishing between the process (intelligence) and its output (skill) is important. For instance, a chess program designed by an intelligent programmer is the output of an intelligent process but is not itself intelligent; its inability to adapt beyond the game of chess marks its lack of general intelligence [00:29:00].
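
To make the process/output distinction concrete, here is a toy sketch (purely illustrative, not code discussed in the episode): a hard-coded program embodies a single skill, while a learning process, however simple, produces new skills from a few demonstrations of a task its author never anticipated.

```python
# Toy illustration of skill (output) versus skill acquisition (process).
# Purely hypothetical; not code discussed in the episode.

def handcrafted_skill(x: int) -> int:
    """The output of an intelligent programmer: it doubles its input, and only that."""
    return 2 * x

def acquire_skill(examples):
    """A deliberately simple learning process: fit y = a*x + b to a few
    demonstrations and return a brand-new skill usable on unseen inputs."""
    n = len(examples)
    mean_x = sum(x for x, _ in examples) / n
    mean_y = sum(y for _, y in examples) / n
    var_x = sum((x - mean_x) ** 2 for x, _ in examples)
    a = sum((x - mean_x) * (y - mean_y) for x, y in examples) / var_x if var_x else 0.0
    b = mean_y - a * mean_x
    return lambda x: a * x + b

# The handcrafted program cannot deviate from doubling; the learner adapts
# to a task it was never written for (here, y = 3x + 1) from three examples.
new_skill = acquire_skill([(1, 4), (2, 7), (3, 10)])
print(new_skill(10))  # 31.0
```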

Measuring Intelligence

Chollet’s paper, “On the Measure of Intelligence”, discusses the limitations of current AI benchmarks, which often reward narrow, task-specific skill rather than general intelligence. Chollet, among others, argues that achieving general intelligence requires more than scaling existing technologies such as deep learning systems like GPT-3: while advanced, these systems emphasize memorization over true comprehension and adaptation [00:34:00].

The Issue with Current AI Model Scaling

Chollet is skeptical that simply scaling up models like GPT-3 will yield true general intelligence. Larger models produce more plausible-sounding text, but they have no constraint tying that text to factual consistency, and they cannot adapt to genuinely novel situations [00:45:18].

Core Knowledge and its Role in Intelligence

Chollet emphasizes the importance of core knowledge systems, or innate knowledge, in understanding both human and machine intelligence. This encompasses fundamental concepts like object-ness, agent-ness, and basic physics, which humans are either born with or acquire very early in development [01:39:10].

Testing for Intelligence

The ARC (Abstraction and Reasoning Corpus) challenge, created by Chollet, aims to be a rigorous test of intelligence: each task presents a few demonstration input/output grids and asks the solver to produce the output grid for a new test input, relying only on core knowledge priors. The tasks are solvable by humans without any specialized external knowledge, emphasizing reasoning and adaptation over rote memorization [01:48:52].
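
For reference, the public ARC repository (github.com/fchollet/ARC) distributes each task as a small JSON file containing a few “train” demonstration pairs and one or more “test” pairs, where every grid is a list of rows of integers 0–9 representing colors. Below is a minimal sketch of reading one task; the file path and solver name are hypothetical placeholders.

```python
import json

# Sketch of the ARC task format: "train" holds demonstration input/output
# grid pairs, "test" holds the grids a solver must complete.
# The file path and my_solver below are hypothetical placeholders.
with open("data/training/0a1b2c3d.json") as f:
    task = json.load(f)

for pair in task["train"]:
    demo_input, demo_output = pair["input"], pair["output"]
    # each grid is a list of rows of integers 0-9 (colors);
    # the transformation must be inferred from these few demonstrations

test_input = task["test"][0]["input"]
# predicted_output = my_solver(task["train"], test_input)  # hypothetical solver
```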

Why Current Tests Fail

Traditional tests, like the Turing Test, often fall short by incentivizing superficial mimicry rather than true intelligence. Chollet criticizes these tests for relying too heavily on subjective human judgment rather than objective measures of general intelligence [02:10:07].

Conclusion

Defining and testing general intelligence involves recognizing the distinction between skill acquisition and the underlying learning process. As AI technology advances, it is crucial to focus on developing tests and methodologies that can accurately measure and foster true general intelligence, thus bridging the gap between human cognitive abilities and artificial systems.