From: mk_thisisit
Nobel Prize winner Roger Penrose argues that the term “artificial intelligence” is a misnomer, because true intelligence requires consciousness [00:00:04], [00:02:04], [00:13:56]. Penrose asserts that computers are fundamentally computational machines, based on a “very limited part of mathematics” [00:00:17], and will therefore never gain consciousness [00:00:24]. He holds that consciousness is not computational [00:00:30], which is his core argument against machines ever achieving genuine intelligence, let alone superintelligence [00:13:47].
The Problem with “Artificial Intelligence”
The speaker argues that current AI systems, however powerful, “do not know what [they are] doing” [00:00:35], [00:09:02], [00:10:45]. They simply perform calculations [00:02:23] and lack genuine understanding [00:10:49]. This perspective is laid out in his book The Emperor’s New Mind [00:11:10].
Instead of “artificial intelligence”, the term “artificial cleverness” might be more appropriate [00:14:09]. Just as some mathematics students are merely clever at repeating what they’ve learned without true understanding [00:14:18], AI excels at complex computations and pattern analysis but lacks awareness or comprehension [00:15:44].
Gödel’s Theorem and Uncomputability
The foundation of this argument stems from Penrose’s studies in mathematical logic, where he encountered Gödel’s theorem and the concept of computability [00:03:12].
Understanding Gödel’s Theorem
Gödel’s theorem is described as “incredible” [00:03:28] because it indicates “things whose understanding goes beyond their use” [00:03:31].
The theorem essentially states that within any consistent axiomatic system capable of basic arithmetic, there will always be true statements that cannot be proven within that system [00:03:50], [00:06:02]. This is shown by constructing a statement that effectively says: “You cannot prove me using these rules” [00:06:09]. If the rules could prove this statement, the statement would be false, and the system would have proved a falsehood; assuming the system is sound, that cannot happen. Therefore the statement is unprovable by the given rules, and since that is exactly what it asserts, it must be true [00:06:26].
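As a rough illustration (schematic notation, not taken from the interview), the self-referential sentence can be written as follows, where Prov_F denotes provability in a consistent, sound formal system F:

```latex
% Schematic Goedel sentence for a consistent, sound formal system F
% (illustrative notation only; not Penrose's own wording)
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)
% If F proved G, then Prov_F(⌜G⌝) would hold and G would be false,
% so the sound system F would have proved a falsehood. Hence F cannot
% prove G, and G -- which asserts exactly that -- is true but unprovable in F.
```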
Computability and Uncomputability
A key aspect of Gödel’s theorem is its implication for “computability” [00:05:09]. Computable means that something can be carried out by a computing machine [00:05:09], in the sense defined by Alan Turing and others such as Church and Curry, whose definitions all turned out to be equivalent [00:05:17].
However, Gödel’s theorem demonstrates that there are “uncomputable things in mathematics” [00:05:02], [00:17:04]. A significant portion of mathematics extends “far beyond computability” [00:16:17] and cannot be calculated using an algorithm [00:16:24].
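A standard illustration of uncomputability (not discussed in this form in the interview) is Turing’s halting problem. The sketch below assumes a hypothetical function halts(program, data) and shows, via diagonalization, why no such total, correct decider can exist:

```python
# Sketch of Turing's diagonal argument (hypothetical code, for illustration only).
# Assume, for contradiction, that `halts` is a correct decider that always
# terminates: halts(program, data) is True iff program(data) eventually stops.

def halts(program, data):
    """Hypothetical halting decider, assumed to exist for the argument."""
    raise NotImplementedError("No such total, correct function can exist.")

def diagonal(program):
    """Do the opposite of whatever `halts` predicts for `program` run on itself."""
    if halts(program, program):
        while True:        # predicted to halt -> loop forever instead
            pass
    return "done"          # predicted to loop forever -> halt immediately

# Feeding `diagonal` to itself is contradictory:
# if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop forever;
# if it were False, diagonal(diagonal) would halt. Either way `halts` is wrong,
# so the halting problem is uncomputable -- a fact no algorithm can decide.
```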
AI’s Inability to Transcend Rules
AI operates on computational principles [00:04:49]. However, it cannot “create its own rules” [00:07:05], because it does not “know that they are true” [00:07:13]. The core of Gödel’s theorem is about going “beyond the rules” [00:07:22] by “understanding why they are true” [00:07:25].
This understanding requires “awareness” [00:07:54] and knowing “what you are doing” [00:07:57], which implies consciousness [00:08:17]. Consciousness is what allows humans to “transcend the rules” [00:08:25] by grasping the reasons for their truth [00:08:31].
Consciousness and Uncomputable Physics
The speaker posits that consciousness is a “physical process” [00:23:49] linked to a type of physics “we probably do not know yet” [00:18:44]. This unknown physics is characterized by “uncomputability” [00:14:38].
While current physics, such as general relativity, can be calculated [00:12:29], the physics underlying conscious thinking is uncomputable “in principle” [00:12:57] and “cannot be calculated” [00:12:16]. This means that however sophisticated an AI becomes, it cannot achieve consciousness, because it is built entirely on computable processes [00:13:06].
The Role of Quantum Mechanics
Penrose suggests that this deeper understanding lies in the quantum world [00:18:56], specifically the “collapse of the wave function” [00:23:57]. He views quantum theory as “fundamentally incomplete and even in a sense wrong” [00:19:10] because it assumes quantum superpositions hold at all levels [00:19:16].
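To make the terms concrete (a textbook illustration, not the interview’s own notation): a single qubit can sit in a superposition of two states, and standard quantum mechanics treats this superposition, and its “collapse” on measurement, as holding exactly at every scale:

```latex
% Textbook superposition of a single qubit and its collapse on measurement
% (standard notation, included here only to illustrate the terms)
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
% Measurement yields |0> with probability |alpha|^2 or |1> with probability
% |beta|^2; orthodox theory assumes such superpositions persist at all scales
% until measurement, which is precisely the assumption Penrose questions.
```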
While a universal quantum computer might one day produce synthetic consciousness [00:29:26], this would require understanding how the quantum world works beyond its currently computable elements [00:19:00]. The quantum world itself exhibits peculiar behaviors, such as “backward causality” [00:20:04] and phenomena like the EPR paradox, which cannot be fully understood in terms of classical reality alone [00:27:04], [00:28:53].
Nobel Prizes and the Definition of Physics
The speaker questions whether recent Nobel Prizes awarded for AI-related work, such as the physics prize for work on artificial neural networks (Geoffrey Hinton) [00:20:00] or the recognition given to AI and DeepMind researchers (Demis Hassabis, David Baker) [00:01:36], truly represent physics. He suggests these are technological advancements rather than fundamental theoretical physics [00:20:41].
In conclusion, the speaker argues that current AI systems are powerful computational tools, but their fundamental nature as machines based on computable mathematics means they lack consciousness and the capacity for true understanding, as illuminated by the principles of Gödel’s theorem and the uncomputable aspects of reality.