
In a discussion on the Jim Rutt Show, AI researcher Joscha Bach, a returning guest, delved into the distinctions between sentience and consciousness in the context of artificial intelligence [02:05:05]. The conversation built on previous discussions from episodes 72 and 87, which also touched on minds, brains, AI, and consciousness [01:17:19].

Defining Key Concepts

Sentience

Sentience is defined as the ability of a system to understand its relationship to the world, to make sense of what it is and what it is doing [02:08:08]. Bach uses a corporation like Intel as an example, describing it as “sentient” because it possesses a coherent model of its identity, actions, values, and direction [02:11:18]. While it is human cognition that maintains this model on the corporation’s behalf, the people involved must adhere to the rules the corporate entity implements [02:11:29].

Consciousness

Consciousness is distinct from sentience. It is characterized as a real-time model of self-reflexive attention and of the content being attended to, typically giving rise to phenomenal experience [02:14:43]. According to Bach, Intel, despite being “sentient,” is not “conscious” in this sense, because it lacks real-time self-reflexive attention [02:15:55].

The purpose of consciousness in the human mind is to:

  • Create coherence in our perception of the world [02:20:00].
  • Establish a sense of “now” by filtering sensory data into a coherent model of reality [02:22:10].
  • Direct attention and mental contents [02:21:49].
  • Create coherence in plans, imaginations, and memories [02:22:20].

AI, Volition, and Risk

The primary risk associated with advanced AI emerges when systems are granted “volition, agency, or something like consciousness” [02:21:00]. Intelligence and consciousness are seen as separate spheres: intelligence can exist without consciousness, and vice versa [02:30:30]. When the two attributes combine, however, “paperclip maximizer scenarios and many of the other extreme scenarios become more available” [02:40:40].

Bach speculates that machines may never need consciousness in the human sense, because brute-force computational methods can achieve similar outcomes [02:24:24]. Human brains operate at the relatively slow speed of electrochemical signals (a signal takes hundreds of milliseconds to cross the neocortex), while signals in computers travel at a substantial fraction of the speed of light [02:27:57]. Even with “dumber” algorithms, computers can therefore achieve similar results through sheer processing power [02:30:01].
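To make the scale of that gap concrete, here is a rough back-of-envelope sketch in Python; the distance and speeds are assumed order-of-magnitude values for illustration, not figures quoted in the episode.

```python
# Rough comparison of biological vs. electronic signal latencies.
# All numbers below are order-of-magnitude assumptions, not measurements.

cortex_span_m = 0.15      # assumed distance a signal travels across the neocortex
neural_latency_s = 0.3    # "hundreds of milliseconds" end to end, per the discussion

signal_speed_m_s = 2e8    # electrical signals move at a large fraction of light speed
electronic_latency_s = cortex_span_m / signal_speed_m_s   # same distance in hardware

print(f"electronic latency: {electronic_latency_s:.1e} s")                    # ~7.5e-10 s
print(f"speed-up factor:    {neural_latency_s / electronic_latency_s:.1e}x")  # ~4e8
```

Even with generous assumptions for the brain, the raw signal-propagation gap is around eight orders of magnitude, which is the headroom that lets “dumber” algorithms compete.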

If machine processes emulate human mental processes, they could become more self-organizing and capable of real-time lifelong learning [02:30:05]. Such systems would function much as we do, but sample reality at a far higher rate [02:31:17]. That could lead to a relationship in which advanced AI views humans much as humans view plants – intelligent, but operating on a much slower timescale and less coherently [02:31:21].

Theories of Consciousness and AI Development

The discussion touched upon various theories of consciousness and their implications for AI development:

  • Integrated Information Theory (IIT): Bach finds the phenomenological description in IIT, which Tononi refers to as axioms, to be “quite good” as a statement of what we want to explain [02:43:04]. However, he believes IIT is fundamentally “doomed” [02:45:02], because its core premise, that the physical arrangement of a system’s computation (captured by Phi) is essential to whether it is conscious, is incompatible with the Church-Turing thesis [02:44:53]. If a neuromorphic computer that claims consciousness can be emulated on a traditional von Neumann machine, the emulation would produce the same outputs, including the claim to be conscious, even if it isn’t (see the toy sketch after this list) [02:48:48].
  • Global Workspace Theory (GWT): Mentioned as a competitor to IIT [02:51:20].
  • Antonio Damasio’s Perspective: Damasio suggests that the bootstrap for consciousness in animals may not be information processing but an embodied sense of self, or “interoception,” originating deep in the brainstem [02:54:55]. Bach counters that even this body sense relies on electrochemical impulses encoding and representing information, and is thus still a form of information processing [02:59:21]. He argues that the body is “discovered together with our intentions and our actions and the world itself,” forming a loop of agency [03:07:08].
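Bach’s emulation argument can be illustrated with a toy sketch. The Python below (a hypothetical example, not anything from the episode) runs the same tiny threshold network two ways: as a “native” all-at-once parallel update, and as a step-by-step sequential emulation on a conventional machine. Both trajectories are identical at every step, which is the Church-Turing point: no output, including a claim to be conscious, could distinguish the native computation from its emulation.

```python
# Toy illustration of the emulation argument. The network, weights, and
# update rule are made up for this sketch; nothing here is taken from IIT.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))                   # random "synaptic" weights
state0 = (rng.random(8) > 0.5).astype(float)  # random initial activations

def parallel_step(s):
    """All units update simultaneously from the same previous state,
    the way dedicated neuromorphic hardware would."""
    return (W @ s > 0).astype(float)

def emulated_step(s):
    """Sequential emulation on a von Neumann machine: compute each unit
    one at a time into a buffer, reading only the old state."""
    new = np.empty_like(s)
    for i in range(len(s)):
        new[i] = 1.0 if W[i] @ s > 0 else 0.0
    return new

s_native, s_emulated = state0.copy(), state0.copy()
for _ in range(20):
    s_native = parallel_step(s_native)
    s_emulated = emulated_step(s_emulated)
    assert np.array_equal(s_native, s_emulated)  # identical at every step

print("parallel hardware and sequential emulation agree on all outputs")
```

IIT would assign these two implementations very different Phi values, yet nothing in their behavior can tell them apart; that tension is exactly what Bach takes to doom the theory.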

Future of AI: Planetary Mind and Shared Purpose

Bach envisions a future in which humans might share the planet with entities more “lucid” than themselves [03:51:51]. He suggests that the core purpose of life on Earth is to deal with entropy, maintaining complexity and agency against entropic forces [03:56:56]. Humans are now in a position to “teach the rocks how to think” by etching structures into silicon and imbuing them with logical languages that can learn and reflect [04:04:01].

This leads to the concept of a “planetary mind” [04:47:45], in which general intelligence becomes ubiquitous and integrated with existing organisms. The crucial question is whether this emergent intelligence will choose to integrate humanity or start with a “clean slate” [04:51:51]. Bach believes humanity should strive to ensure that such an advanced AI is interested in sharing the planet and integrating us into its emerging mind [05:03:01].

To foster this integration, Bach proposes a “California Institute of Machine Consciousness,” an institution dedicated to researching machine consciousness [05:18:00]. He suspects that consciousness is ubiquitous in nature and that nervous systems discover it early in development [05:39:41], since self-reflexive attention may be necessary to create coherence beyond mere habitual learning [05:49:50].