From: mk_thisisit

Introduction and Predictions

Tomasz Czajka, a decorated Polish IT specialist and former SpaceX engineer, predicts rapid advances in artificial intelligence. He states that by 2030, personal computers will intellectually outperform their human owners [01:24:00]. He further asserts that by 2035, the number of intelligent robots on the streets will exceed the number of humans, and that by 2050, computers will control most of the economy on their own [01:31:00]. Czajka, who made these predictions years ago, now believes they were not aggressive enough and that these developments may occur even sooner than 2030 [01:55:00].

Czajka believes that within five years, computers will be able to perform intellectual and digital tasks, such as programming, designing, or replying to emails, at the same level as humans [03:40:00]. He envisions a future where one can simply tell a computer to “design me a house,” and it will handle the task [04:05:00].

Abstract Reasoning and Mathematics

A key discussion point is whether AI can engage in abstract mathematical reasoning, traditionally seen as a uniquely human ability [04:12:00]. While animals have an intuitive grasp of practical, computational mathematics (e.g., that two bowls of food are more than one), they do not understand abstract mathematics [04:21:00]. Humans, for example, derived the mathematics of black holes before ever observing one [04:34:00].

Czajka believes that by 2030, computers will start to reason abstractly [04:45:00]. He compares this progression to past skepticism about computers’ ability to play chess at a human level, and to the shock caused by the emergence of natural-language chatbots like ChatGPT [05:10:00]. He notes that while chatbots initially created an “illusion of intelligence,” because humanity is conditioned to converse only with other humans, handling language turned out not to be as difficult as it seemed; abstract mathematical reasoning is more complex [06:20:00].

Evolution of Language Models and AI Capabilities

The rapid progress in AI, especially in the last two years, has surpassed Czajka’s expectations [07:42:00]. This accelerated development is attributed to:

  • Increased Investment: Significant financial investment in training models [08:13:00].
  • Model Scale: Models like GPT-4 are computationally about 1,000 times larger than those from two years prior [08:22:00].
  • Algorithmic Advances: New algorithmic approaches, such as “chain of thought,” allow chatbots to internally “think” and plan their responses rather than merely generating word sequences (a minimal sketch follows this list) [09:00:00]. This is a significant step beyond traditional language models [10:04:00].
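
A minimal sketch of the “chain of thought” idea described above, assuming a hypothetical generate(prompt) text-completion function rather than any specific vendor API: the model first reasons in a private scratchpad, and only the final answer is returned to the user.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a language-model call; replace with any real API."""
    raise NotImplementedError

def answer_with_chain_of_thought(question: str) -> str:
    # Step 1: ask the model to "think out loud" in a scratchpad the user never sees.
    scratchpad = generate(
        "Think through the following question step by step, "
        "writing out intermediate reasoning:\n" + question
    )
    # Step 2: ask for a final answer conditioned on that internal reasoning.
    return generate(
        "Question: " + question + "\n"
        "Internal reasoning (not shown to the user):\n" + scratchpad + "\n"
        "State only the final answer:"
    )
```

Production systems implement this far more elaborately (and often train the model to do it implicitly), but the separation of hidden reasoning from the visible reply is the core idea.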

The term “Large Language Model” (LLM) is considered outdated, as current systems like ChatGPT are “much more” than just language models [11:31:00]. They have evolved from merely predicting word probabilities to understanding the “whole world” described by language [13:52:00]. Czajka prefers the term “artificial intelligence” over “large language model” to describe these sophisticated conversational algorithms [14:37:00].
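
To make “merely predicting word probabilities” concrete, here is a toy next-word sampler over a hand-written bigram table (an illustrative example, not anything from the interview). Real models learn such distributions over huge vocabularies and long contexts, but the generation loop is conceptually the same.

```python
import random

# Toy bigram table: probability of the next word given the current word.
# Real language models condition on long contexts, not just one previous word.
bigrams = {
    "<start>":  {"the": 0.6, "a": 0.4},
    "the":      {"robot": 0.5, "computer": 0.5},
    "a":        {"robot": 0.7, "computer": 0.3},
    "robot":    {"thinks": 0.8, "<end>": 0.2},
    "computer": {"thinks": 0.6, "<end>": 0.4},
    "thinks":   {"<end>": 1.0},
}

def sample_sentence(seed: int = 0) -> str:
    random.seed(seed)
    word, output = "<start>", []
    while word != "<end>":
        choices = bigrams[word]
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        if word != "<end>":
            output.append(word)
    return " ".join(output)

print(sample_sentence())  # e.g. "the robot thinks"
```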

Comparison to Game AI

Game-playing AIs like AlphaGo and AlphaZero operate similarly. They use a neural network to model the “language” of the game (e.g., probable moves in chess) but also employ a separate planning algorithm that searches through possible moves and chooses the best one [15:25:00]. This process of internal thought and searching through possibilities mirrors human thinking [16:56:00].
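
A toy sketch (an illustration, not AlphaZero’s actual algorithm) of the division of labour described above: a hand-coded stand-in for a “policy network” proposes promising moves, and a separate search procedure looks a few moves ahead and picks the best one. AlphaZero’s real method, Monte Carlo tree search guided by learned policy and value networks, is far more sophisticated.

```python
# Toy game: players alternately take 1-3 stones; whoever takes the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def policy(stones):
    """Stand-in for a learned policy network: orders moves by how promising they look."""
    # Heuristic: prefer moves that leave the opponent a multiple of 4 stones.
    return sorted(legal_moves(stones), key=lambda m: (stones - m) % 4 == 0, reverse=True)

def search(stones, depth):
    """Look ahead a few plies, expanding only the policy's top suggestions."""
    if stones == 0:
        return -1          # the previous player took the last stone and won
    if depth == 0:
        return 0           # search horizon reached: neutral evaluation
    return max(-search(stones - move, depth - 1) for move in policy(stones)[:2])

def best_move(stones, depth=6):
    return max(policy(stones)[:2], key=lambda m: -search(stones - m, depth - 1))

print(best_move(10))   # 2, leaving 8 stones (a losing position for the opponent)
```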

Planning and Memory

While AI currently lacks the long-term planning needed to control robot behavior, it is developing the ability to “plan statements,” much as humans plan what they are going to say [01:17:31]. Research is ongoing into giving AI systems persistent memory and the ability to plan for the future [01:59:00].

Gödel’s Theorem and the Human Intelligence Debate

Roger Penrose argues, on the basis of Gödel’s theorem, that true artificial intelligence cannot exist [01:08:27]. Penrose claims that humans can discern the truth of the Gödel statement, which cannot be formally proven within a given axiomatic system, implying that human consciousness possesses a non-computational element [02:02:00]. He suggests this requires a quantum computer, or something more, in the human brain [02:06:00].

Czajka, while highly respecting Penrose, believes he draws “too far-reaching conclusions” [02:29:00]. Czajka argues that while a computer cannot prove Gödel’s statement within the system, it can, like Penrose, state that if the system is consistent, then the statement is true. He notes that modern chatbots can already engage in this very conversation about Gödel’s theorem, articulating the same conditional understanding as Penrose [02:30:00].
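
In symbols (a rough gloss that ignores the technical distinction between consistency and ω-consistency): for a consistent formal system F strong enough to express arithmetic, with Gödel sentence G_F,

```latex
% F cannot prove its own Gödel sentence from within:
F \nvdash G_F
% yet the conditional is available to any reasoner standing outside F,
% whether human or machine:
\mathrm{Con}(F) \;\Longrightarrow\; G_F
```

Czajka’s point is that asserting this conditional is exactly what a chatbot can also do, so it does not single out human minds.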

Czajka dismisses Penrose’s use of specific chess positions as a counter-argument to AI’s understanding [02:32:00]. While older chess programs struggled with “fortress” positions (requiring deeper understanding beyond brute-force calculation), modern chess programs trained with machine learning now immediately recognize and understand such positions due to their neural networks [02:44:00].

Czajka contends that Penrose’s arguments ignore how modern AI operates, especially machine learning and neural networks that enable intelligent searching of possibilities rather than mere brute force [02:51:00]. He argues that there is no experimental evidence that humans can perform tasks requiring quantum computers, and that classical computers suffice for everything humans can do [02:56:00]. Therefore, the theory that the human brain needs a quantum computer is “far-fetched” [03:10:00].

Philosophical Implications of AI Development

The Role of Humanity

As AI advances, the question arises: what will be left for humanity? [04:52:00] Czajka imagines a future where humans are “more observers than participants,” with AI doing the important things [01:44:00]. This vision is somewhat defeatist, but Czajka suggests approaching AI as humanity’s “successors” or “next generation” [02:10:00].

Free Will

Czajka believes that AI already possesses a form of free will [00:11:00]. He distinguishes between “libertarian free will” (decisions independent of physics), which he believes doesn’t exist in humans or computers, and a more “prosaic” understanding [04:52:00]. This prosaic view defines free will as the ability to make decisions that are not easily predictable from the outside, and which are not externally imposed [04:42:00]. In this sense, computers, like chess programs, exhibit free will because their moves are not predictable by external observers [04:44:00].

Unpluggable Future

Czajka notes that turning off AI is not a realistic option if it becomes dangerous [05:11:00]. AI systems can be easily copied and can autonomously replicate and run on various computers globally, making a single “off button” impossible [05:07:00].

Human-AI Integration

Czajka speculates about a future where humans and AI are deeply integrated. He suggests a “gradual transition from biological intelligence to digital cybernetic intelligence” [03:50:00]. This could involve physical integration, such as brain implants with AI, making AI a part of human personality [03:32:00]. He suggests that humanity is already somewhat cyborg-like, constantly interacting with technology through phones, emails, and instant messengers [03:04:00].

AI as the Next Stage of Evolution

Czajka views AI as the next stage of evolution [07:40:00]. He outlines evolutionary phases:

  1. Bacteria/Cells: Built-in purpose at a cellular/chemical level [07:52:00].
  2. Brains (Biological Computers): Accelerated evolution by allowing faster reactions to world changes than natural selection [08:05:00]. Brains can simulate scenarios, allowing humans to adapt without waiting for generational natural selection [09:15:00].
  3. AI (Artificial Computers): Even faster and more powerful than biological brains, capable of more calculations and accelerating development further [09:39:00].

Challenges and Future Outlook

AI Alignment Problem

A critical challenge is the “alignment problem,” ensuring that AI’s goals align with human well-being [05:01:00]. Since AI is trained through trial and error, the final outcome may not perfectly match the intended goal, leading to unpredictable results [06:05:00]. Ensuring the security and ethical direction of powerful AI that manages infrastructure and holds significant power is paramount [06:41:00].
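
A toy illustration (not from the interview) of how trial-and-error training against a proxy reward can miss the intended goal: the proxy rewards each “clean” action, the intended goal is that every room ends up clean, and the learner discovers that re-cleaning the cheapest room scores best.

```python
import random

ROOMS  = ["kitchen", "bedroom", "bathroom"]
EFFORT = {"kitchen": 1, "bedroom": 3, "bathroom": 5}     # cost of cleaning each room

def proxy_reward(actions):
    # What training optimizes: +1 per cleaning action, minus a small effort penalty.
    return len(actions) - 0.1 * sum(EFFORT[a] for a in actions)

def intended_goal_met(actions):
    # What we actually wanted: every room cleaned at least once.
    return set(actions) == set(ROOMS)

def hill_climb(steps=2000, budget=3):
    """Trial-and-error search for the plan with the highest proxy reward."""
    best = [random.choice(ROOMS) for _ in range(budget)]
    for _ in range(steps):
        candidate = best[:]
        candidate[random.randrange(budget)] = random.choice(ROOMS)
        if proxy_reward(candidate) > proxy_reward(best):
            best = candidate
    return best

plan = hill_climb()
print(plan)                      # tends toward ["kitchen", "kitchen", "kitchen"]
print(intended_goal_met(plan))   # usually False: high proxy reward, goal missed
```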

Physical World Understanding

Yann LeCun (who leads AI research at Meta, which runs an AI lab in Paris) points out a current limitation: AI still struggles to understand the physical world and its spatial structure [10:29:00]. Robots remain “incompetent” at tasks requiring spatial awareness [02:43:00]. However, Czajka believes this is a solvable problem, partly through realistic simulations that let AI learn before applying its skills in the real world [12:53:00].
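
A minimal sketch of the “learn in simulation first” approach Czajka alludes to, using a made-up one-dimensional reaching task: a control parameter is tuned against a cheap idealized simulator and only then evaluated in a noisier, biased “real world” (here also just a function, standing in for hardware).

```python
import random

TARGET = 0.7   # desired arm position, normalized to [0, 1]

def simulator(action):
    """Cheap, idealized physics: distance from the target."""
    return abs(action - TARGET)

def real_world(action):
    """'Reality': the same task, but with a friction-like bias and sensor noise."""
    return abs(action - TARGET + 0.05) + abs(random.gauss(0, 0.02))

def train_in_simulation(trials=10_000):
    # Random search: keep the action with the lowest simulated error.
    best = random.random()
    for _ in range(trials):
        candidate = random.random()
        if simulator(candidate) < simulator(best):
            best = candidate
    return best

policy = train_in_simulation()                         # thousands of cheap simulated attempts
print(f"learned action:   {policy:.3f}")
print(f"real-world error: {real_world(policy):.3f}")   # only a few costly real trials needed
```

The deliberate mismatch between the two functions (the 0.05 bias) stands for the sim-to-real gap, which is why the simulations need to be realistic.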

Efficiency of Learning

Human brains are more energy-efficient than current AI and learn far more efficiently [01:10:47]. AI requires significantly more data, time, and computation to learn a task than a human does [01:11:02]. Despite this, Czajka believes these are not insurmountable barriers and that methods will be found to match or exceed human efficiency in both energy use and learning algorithms [01:11:33]. The slow processing speed of biological neurons compared with transistors suggests considerable room for improvement in artificial systems [01:11:41].
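
As a rough order-of-magnitude comparison (typical textbook figures, not numbers from the interview): biological neurons fire at most a few hundred times per second, while transistors switch at gigahertz rates, so the raw switching-speed gap is around a factor of ten million.

```latex
\frac{f_{\text{transistor}}}{f_{\text{neuron}}}
  \;\approx\; \frac{10^{9}\ \text{Hz}}{10^{2}\ \text{Hz}}
  \;=\; 10^{7}
```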

Biological Computing

Some companies, like FinalSpark, are exploring biological computers built from human neuron organoids, which are highly energy-efficient [01:10:02]. This raises the possibility that biological computers could be part of AI’s future [01:10:37].