From: lexfridman

Predicates and invariants come up frequently in machine learning, particularly in discussions of what it would take to understand intelligence. This article examines both concepts, drawing on a conversation with Vladimir Vapnik, a pioneer of statistical learning theory and co-inventor of the Support Vector Machine.

Engineering vs. Science of Intelligence

Vladimir Vapnik distinguishes between two paths in the development of intelligence: the engineering approach and the scientific approach. The engineering approach focuses on creating devices that mimic human behavior, while the scientific approach seeks to understand the essence of intelligence. Vapnik emphasizes that these are fundamentally different goals [00:03:11].

Predicates in the Understanding of Intelligence

Vapnik draws on the work of the folklorist Vladimir Propp, who identified 31 recurring narrative units in Russian folk tales; Vapnik treats these as predicates that define the structure of storytelling. He suggests that predicates are crucial for understanding human behavior, not only in literature but also in tasks like digit recognition [00:07:17]. Intelligence, he argues, involves discovering predicates and invariants that hold across broad classes of problems rather than within any single task such as image or digit recognition [00:10:55].

Definition of a Predicate

According to Vapnik, a predicate can be formally defined as a function computed on the data that captures a property the learned decision rule should preserve; the constraint that this property is preserved is an invariant. For instance, the degree of symmetry of an image serves as a predicate in two-dimensional image recognition [00:10:01].
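To make this concrete, here is a minimal sketch of what a symmetry predicate might look like in code. The function name and scoring scheme are illustrative assumptions, not something from the conversation: it simply measures how close a 2D image array is to its own mirror image.

```python
import numpy as np

def horizontal_symmetry(image: np.ndarray) -> float:
    """Score how mirror-symmetric an image is about its vertical axis.

    Returns a value in [0, 1]; 1 means perfectly symmetric.
    (Illustrative predicate, not taken from the conversation.)
    """
    mirrored = np.fliplr(image)
    diff = np.abs(image - mirrored).mean()
    scale = np.abs(image).mean() + 1e-12  # guard against division by zero
    return float(1.0 - min(diff / (2 * scale), 1.0))

# A vertically striped image is perfectly symmetric about the vertical axis...
stripes = np.tile([0.0, 1.0, 1.0, 0.0], (4, 1))
print(horizontal_symmetry(stripes))  # -> 1.0

# ...while random noise generally is not.
rng = np.random.default_rng(0)
print(horizontal_symmetry(rng.random((4, 4))))  # well below 1.0
```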

The Role of Predicates in Machine Learning

The discussion raises a fundamental question: can learning to imitate intelligence through predicates bring us closer to understanding intelligence itself? Vapnik suggests that understanding may amount to discovering predicates that shrink the space of admissible functions [00:46:02]. While current machine learning systems lean heavily on large volumes of empirical data, he believes the real challenge is to build systems that need far fewer examples by exploiting strong predicates [00:47:02].
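As a toy illustration (not Vapnik's actual algorithm), the sketch below shows how an invariant built from a predicate can prune a hypothesis space before any accuracy-based selection happens. The data, the predicate psi, and the random linear candidates are all assumptions made for the example; the constraint is loosely modeled on the form of Vapnik's statistical invariants, which require the learned rule to reproduce the empirical statistic the predicate forms with the labels.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: the label is the sign of the first feature; the rest is noise.
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0])

def psi(x):
    """A predicate on the inputs (here, simply the first coordinate)."""
    return x[:, 0]

# Candidate hypothesis space: random linear classifiers.
W = rng.normal(size=(5000, 5))

def predict(w, x):
    return np.sign(x @ w)

# Invariant: an admissible f must reproduce the empirical statistic
#   (1/l) * sum_i psi(x_i) f(x_i)  ~=  (1/l) * sum_i psi(x_i) y_i
target = (psi(X) * y).mean()
stats = np.array([(psi(X) * predict(w, X)).mean() for w in W])
admissible = W[np.abs(stats - target) < 0.05]

print(f"hypotheses surviving the invariant: {len(admissible)} / {len(W)}")
# The invariant discards the vast majority of candidates before a single
# labeled example is spent on selection, which is the claimed data saving.
```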

Challenges in Discovering Predicates

A key challenge is identifying useful predicates within an effectively infinite space of candidates. Vapnik likens the discovery of such predicates to scientific breakthroughs in physics: finding a small set of core principles that explains a wide range of phenomena. He suggests that automated methods could eventually assist in predicate discovery, though human intuition currently carries most of the weight [00:29:02].
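One crude, hypothetical way to automate part of this search is to score candidate predicates by how much signal they carry about the labels. Everything below (the synthetic images, the three candidate predicates, the correlation score) is an illustrative assumption, meant only to show why the search is hard: any such hand-written list barely scratches an infinite space of candidates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy labeled images: class 1 images are mirror-symmetric, class 0 are not.
def make_image(symmetric):
    half = rng.random((8, 4))
    right = half[:, ::-1] if symmetric else rng.random((8, 4))
    return np.hstack([half, right])

images = [make_image(s) for s in [True] * 50 + [False] * 50]
labels = np.array([1] * 50 + [0] * 50)

# Candidate predicates (hypothetical; chosen only for illustration).
predicates = {
    "mean intensity": lambda im: im.mean(),
    "horizontal symmetry": lambda im: -np.abs(im - im[:, ::-1]).mean(),
    "top/bottom contrast": lambda im: im[:4].mean() - im[4:].mean(),
}

# Score each candidate by |correlation| between its value and the label.
for name, p in predicates.items():
    values = np.array([p(im) for im in images])
    corr = np.corrcoef(values, labels)[0, 1]
    print(f"{name:22s} |corr with label| = {abs(corr):.2f}")
# Only the symmetry predicate should score near 1 on this toy problem;
# the hard part in practice is generating good candidates at all.
```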

Implications for Machine Learning

Understanding and applying predicates could substantially reduce the amount of data required for learning tasks such as handwritten digit recognition. Vapnik poses a concrete challenge to the field: match state-of-the-art results on benchmarks like MNIST while training on far fewer examples [00:48:01].
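A minimal sketch of the evaluation protocol behind that challenge follows, using scikit-learn's bundled 8x8 digits dataset as a stand-in for MNIST so the example stays self-contained; the sample sizes and classifier choice are assumptions made for illustration.

```python
# How does accuracy degrade as the training set shrinks?
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0
)

for n in (50, 100, 500, len(X_train)):
    clf = SVC(kernel="rbf", gamma="scale")  # Vapnik's own SVM, fittingly
    clf.fit(X_train[:n], y_train[:n])
    print(f"n={n:4d}  test accuracy = {clf.score(X_test, y_test):.3f}")
# The open problem is closing the gap at small n by injecting predicates,
# rather than by collecting more labeled examples.
```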

Incorporating predicates and invariants could also inform related topics, such as Bias and Variance, and Security and Fairness in machine learning.

Conclusion

Predicates and invariants represent a potentially powerful paradigm for machine learning: by capturing the core conceptual structure of a problem, they could reduce dependence on large datasets and make learning markedly more efficient. Vladimir Vapnik's insights reinforce the value of philosophical and theoretical exploration alongside empirical advances, and connect naturally to the ongoing study of Causal Inference and the nature of intelligence itself.