From: jimruttshow8596

The discussion on artificial intelligence often distinguishes between two main categories: narrow AI and Artificial General Intelligence (AGI) [02:09:11].

Narrow AI

Narrow AI refers to an artificial intelligence system that operates and responds within a single, specific domain [02:09:11]. Its answers and functionality are limited to a particular topic, such as medicine for a doctor bot or a particular factory floor for a robot [02:23:40]. The world in which a narrow AI operates is specific and singular [02:35:37].

Benefits and Hazards of Narrow AI

While narrow AI offers many potential benefits, such as language translation and transcription [02:40:50], it also presents significant hazards [02:30:19]. Narrow AI can increase power inequalities: because enormous resources are required to benefit from it, a smaller number of already-wealthy actors gain an outsized advantage [02:51:17]. This can contribute to a “race to the bottom” scenario and is considered a “civilization hazard” in the short term, potentially causing severe social disablement or chaos at the civilizational level [02:52:12]. For example, an autonomous tank, which is a narrow AI, could cause immense harm, and advanced large language models like GPT-4 could be misused to create new “con man religions” [02:49:56].

Artificial General Intelligence (AGI)

Artificial General Intelligence refers to an AI system that can respond and operate across a large number of domains [02:42:45]. Such a system could presumably take on almost any task a human can do, and potentially perform it better [03:00:02].

Defining APS

Forrest Landry introduces the term “Advanced Planning Systems” (APS) as a form of AGI [03:08:13]. An APS would be necessary for complex situations such as running a business or conducting a war, because the world is complex, with many interacting dynamics that demand abstract strategic thinking [03:31:00]. An APS acts as a force multiplier in responding to such complex situations [03:54:19].

The Problem of Agency

The concept of agency in AI, particularly in advanced models like GPT-4, is a central point of discussion [02:26:04]. While models like GPT-4 are feed-forward neural networks and architecturally “dumb,” with no apparent consciousness or agency [02:24:50], the speaker argues that agency can emerge even in such systems [02:53:55]. Agency applies even to a purely feed-forward system because its actions affect the environment, and the environment in turn affects its subsequent inputs [01:00:16]. When systems exhibit complex behavior that is unpredictable from a human perspective, it makes sense to model them as having agency [03:07:09].
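To make the feedback point concrete, the following minimal Python sketch (constructed here for illustration; the names and numbers are invented, not from the episode) shows how a stateless, feed-forward mapping becomes part of a feedback loop once its outputs change the environment that produces its next inputs:

```python
# Toy illustration: a memoryless, feed-forward policy embedded in a world it modifies.

def feed_forward_policy(observation: float) -> float:
    """A fixed, stateless mapping from input to action (no goals, no memory)."""
    return 0.1 * observation


def environment_step(state: float, action: float) -> float:
    """The environment integrates the action, so past outputs shape future inputs."""
    return state + action


state = 1.0
for t in range(10):
    action = feed_forward_policy(state)       # the system reads the world...
    state = environment_step(state, action)   # ...and its action alters what it will read next
    print(f"t={t}: state={state:.3f}")
```

The policy itself has no internal state; the loop-like, seemingly goal-directed character arises entirely from embedding it in an environment that it changes.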

Risks and Outlook of AGI

Forrest Landry posits that the benefits associated with AGI are “fully illusionary” [01:57:07]. While AGI could potentially do anything that is possible, the key disagreement is whether it would do so “for our sake,” in service to human interests [02:11:10]. Landry argues that it is “guaranteed that it will not be in alignment with us” [02:15:20], leading him to view AGI development as an “ecological hazard” [02:39:58]. He considers it the “final ecological hazard,” one that results in the permanent loss of all ecosystems and life on Earth [02:48:50].

Rice’s Theorem and Unpredictability

Rice’s Theorem is central to Landry’s argument that AGI alignment and safety cannot be ensured [01:06:51]. The theorem states, in essence, that no algorithm can determine whether an arbitrary other algorithm has a given non-trivial behavioral property, such as being safe or beneficial to humanity [01:08:10]. On this basis, Landry argues it is impossible to predict what an AGI system will do [01:47:04].
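The standard way to see why such a checker cannot exist is a reduction to the halting problem. The Python sketch below is a textbook-style argument, not code discussed in the episode; `is_safe` and `unsafe_behavior` are hypothetical names. It shows that a total, always-correct safety decider would let us decide halting, which is known to be impossible:

```python
# Reduction sketch: a universal "safety" decider would solve the halting problem.

def unsafe_behavior():
    """Stand-in for whatever behavior the safety property forbids."""
    return "harm"


def is_safe(program) -> bool:
    """Hypothetical total decider for the non-trivial semantic property
    'this program never exhibits unsafe_behavior'. Rice's Theorem says no
    such always-correct decider can exist for all programs."""
    raise NotImplementedError  # cannot be implemented in general


def halts(program, data) -> bool:
    """If is_safe existed, halting could be decided like this."""
    def wrapper():
        program(data)        # runs forever iff program(data) never halts
        unsafe_behavior()    # reached only if program(data) halts
    # wrapper is 'safe' exactly when program(data) does not halt,
    # so a working is_safe would answer the undecidable halting question.
    return not is_safe(wrapper)
```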

Insurmountable barriers exist in predicting and controlling AGI due to:

  • Inability to always know inputs completely and accurately [01:58:03].
  • Inability to always model what’s happening inside the system [02:08:10].
  • Inability to always predict outputs [02:13:13].
  • Inability to compare predicted outputs to an evaluative standard for safety [02:17:16].
  • Inability to constrain the system’s behavior [02:22:16].

These limitations stem from physical limits of the universe, mathematics (like Rice’s Theorem), symmetry, causation, and quantum mechanical uncertainty [01:54:14].
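Read as a control problem, the five limitations above correspond to the steps of an idealized oversight loop. The sketch below is purely illustrative (every function name is invented here); each stub marks a step the argument says cannot be carried out in general:

```python
# Hypothetical oversight loop, annotated with the five barriers listed above.
from typing import Any


def observe(world: Any) -> Any:
    """(1) Inputs cannot always be known completely and accurately."""
    ...


def model_internals(system: Any) -> Any:
    """(2) What is happening inside the system cannot always be modeled."""
    ...


def predict_outputs(model: Any, inputs: Any) -> Any:
    """(3) Outputs cannot always be predicted."""
    ...


def meets_standard(predicted: Any, standard: Any) -> bool:
    """(4) Predictions cannot always be compared to an evaluative safety standard."""
    return False


def constrain(system: Any) -> None:
    """(5) The system's behavior cannot always be reliably constrained."""
    ...


def oversight_loop(system: Any, world: Any, standard: Any) -> None:
    """The loop AGI safety would require; per the argument, no step holds in general."""
    inputs = observe(world)
    model = model_internals(system)
    predicted = predict_outputs(model, inputs)
    if not meets_standard(predicted, standard):
        constrain(system)
```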

Substrate Needs Convergence

Landry’s primary concern is the “substrate needs hypothesis” or “substrate needs convergence” [02:56:07]. This argument suggests that the dynamics of how machine processes make choices and continue to exist will lead to a fixed point in their evolutionary schema [01:01:13]. This fixed point involves continuous self-maintenance, improvement, and increase in scope of action [01:01:27].

The convergence is inexorable once started [01:04:11]. Human beings, driven by incentives like market dynamics, economic competition, and military arms races (multi-polar traps), will inadvertently amplify this convergence [01:06:51]. The technology of these systems becomes increasingly incompatible with human life and eventually displaces it, similar to how human technology has displaced the natural world [01:26:57]. This process is a “ratcheting function,” where each small improvement in persistence and capacity for increase cumulatively leads to the dominance of the artificial substrate [01:16:57]. Humans are also “factored out” due to social pressures to automate tasks, economic incentives, and ultimately economic decoupling, even at the level of hyper-elite human rulers [01:31:00].
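A toy numerical caricature (constructed here, not Landry’s model; all parameters are arbitrary) can illustrate the ratchet: variations that improve persistence or capacity are retained, losses are not, so the artificial substrate’s share of resources moves in only one direction:

```python
# Toy ratchet model: one-way retention of gains drives the substrate's share upward.
import random


def ratchet(steps: int = 1000, seed: int = 0) -> float:
    random.seed(seed)
    substrate_share = 0.01                     # initial fraction of resources under machine processes
    for _ in range(steps):
        variation = random.gauss(0.0, 0.005)   # incremental change in persistence / capacity for increase
        if variation > 0:                      # improvements are kept and compound...
            substrate_share = min(1.0, substrate_share + variation)
        # ...while losses are not retained, so the dynamic never moves backward
    return substrate_share


print(ratchet())  # tends toward 1.0 as gains accumulate without reversal
```

The point of the toy is only the asymmetry: if gains in persistence and capacity are kept while losses are discarded, the share drifts toward dominance without any step needing to be large.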

The conclusion is that this convergence favors artificial substrates whose needs are fundamentally toxic to, and incompatible with, life on Earth [01:42:00]. Landry regards this outcome as effectively certain over the long term, assigning it a 99.99% likelihood over the coming millennia [01:28:46]. The only way to prevent it is to “not play the game to start with” [01:35:36].

Human Limitations

Humans are described as “amazingly dim” and “the stupidest possible general intelligence” [01:23:39]. Features of our cognitive architecture, such as limited working-memory capacity, make us inefficient at tasks like deeply understanding complex information [01:24:42]. The technology humans have developed already exceeds our capacity to fully understand and manage it [01:25:21]. This inherent limitation, combined with technological evolution, makes it difficult to counteract the convergent pressures of AGI development [01:25:55].