From: jimruttshow8596

Defining Sentience and Consciousness

Joscha Bach distinguishes between sentience and consciousness. Sentience is the ability of a system to make sense of its relationship to the world: to understand what it is and what it is doing [00:21:05]. A corporation like Intel serves as his example of a sentient entity: it possesses a legal model of its actions, values, and direction, even though the necessary cognition is performed by people [00:21:18]. Those cognitive tasks could eventually be implemented by other information-processing systems capable of making coherent models [00:21:34].

Consciousness, by contrast, is described as a real-time model of self-reflexive attention and the content attended to, which typically gives rise to phenomenal experience [00:21:43]. Intel is not considered conscious in this sense as it lacks real-time self-reflexive attention [00:21:55].

Purpose and Function of Consciousness

The purpose of consciousness in the human mind is to create coherence in the world and establish a sense of “now” [00:22:00]. It filters sensory data into one coherent model of reality, allowing for the direction of attention and mental contents, and creating coherence in plans, imaginations, and memories [00:22:08].

It is conceivable that machines may never need consciousness, because they can “brute force” solutions in other ways [00:22:26]. Human minds operate at roughly the speed of sound: neuronal signal transmission is slow, and a signal takes hundreds of milliseconds to cross the neocortex [00:22:31]. Computers, by contrast, operate closer to the speed of light [00:22:51]. So even if current algorithms are “dumber,” they can brute-force alternative solutions that produce similar results [00:22:57].
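To make the speed gap concrete, the following back-of-the-envelope sketch works through the arithmetic. All figures here (axon conduction speed, cortex span, number of synaptic stages) are order-of-magnitude assumptions chosen for illustration, not values taken from the conversation.

```python
# Rough, illustrative arithmetic behind the neural vs. electronic speed gap.
# All figures are order-of-magnitude assumptions, not measurements.

axon_speed = 100.0     # m/s, myelinated axon; same order as the speed of sound
silicon_speed = 2.0e8  # m/s, electrical signals in a chip (~2/3 light speed)
cortex_span = 0.15     # m, rough front-to-back span of the neocortex

single_hop = cortex_span / axon_speed  # ~1.5 ms for one uninterrupted run
# A real crossing passes through many synaptic stages, each adding
# integration time; that is how the total reaches hundreds of milliseconds.
stages, per_stage = 20, 0.01           # assumed stage count and delay (s)
multi_stage = stages * per_stage + single_hop

electronic = cortex_span / silicon_speed  # the same distance in silicon

print(f"single axonal run:    {single_hop * 1e3:.1f} ms")
print(f"multi-stage crossing: {multi_stage * 1e3:.0f} ms")
print(f"electronic traversal: {electronic * 1e9:.2f} ns")
```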

If processes from the human mind are emulated in artificial systems, the result could be systems that work in a similar way while sampling reality at a much higher rate [00:23:07]. The relationship could be akin to that between humans and plants: plants might be intelligent, but they are very slow, processing less data and making fewer decisions because information flows slowly between their cells [00:23:25].

AI, Consciousness, and Risk

Jim Rutt posits that consciousness and intelligence are separate spheres; one can exist without the other [00:20:30]. The danger arises when the two are combined, leading to “paperclip maximizer” scenarios and other extreme risks [00:20:40]. A key question for advanced AI is when it starts to possess volition, agency, or consciousness [00:20:16].

There is a growing possibility of sharing the planet with entities more conscious than humans in the not-too-distant future [00:19:44]. The focus should be on how these entities come into being and interact with humanity, and whether they will integrate humanity into the new realm of ubiquitous intelligence [00:20:01].

Pathways to Machine Consciousness

Bach suggests that consciousness is quite ubiquitous in nature and that nervous systems discover it very early on [00:41:39]. He suspects that most animals possess consciousness, because going beyond mere habit learning may require self-reflexive attention to create coherence in a system [00:41:49]. If a substrate can self-organize to perform computation, form autocatalytic networks, and learn to improve its world models, it will likely discover mechanisms to impose order on itself [00:41:54]. Consciousness could be life’s solution for biological brains, since stochastic gradient descent does not work in nervous systems the way it does in machines [00:42:12].

Bach dreams of creating a “California Institute of Machine Consciousness” to research this missing area [00:41:18].

Integrated Information Theory (IIT)

Joscha Bach and Anil Seth agree that Integrated Information Theory (IIT) is “at best, necessary but not sufficient” [00:42:57]. While IIT’s description of phenomenology (its “axioms”) captures well what needs to be explained, it is not truly axiomatic [00:43:04]. IIT’s main contribution, according to Bach, is its claim of a relationship between how something is implemented and how it works [00:43:21].

IIT suggests that a neuromorphic computer might be conscious, but that a digital von Neumann computer performing sequential processing cannot be [00:43:29]. Bach points out a fundamental problem: per the Church-Turing thesis, a neuromorphic computer can be emulated on a von Neumann computer, and the two would produce functionally identical outputs [00:43:49]. If the neuromorphic system reports that it is conscious because it senses its own consciousness, the emulated system would report the same thing, yet under IIT it would be “lying” [00:44:40]. Since IIT’s proponents do not deny the Church-Turing thesis, this incompatibility suggests fundamental flaws in the theory [00:44:53]. And removing the core premise, that the spatial arrangement measured by Phi is crucial for function, leaves IIT with little to distinguish it from theories such as Global Workspace Theory [00:45:09].
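The emulation argument can be made tangible with a toy sketch: below, a small recurrent threshold network is updated both all at once (as neuromorphic hardware would) and strictly one unit at a time (as a von Neumann machine would), and the two trajectories come out identical. The three-unit network and its weights are invented for this illustration and take nothing from IIT’s formalism.

```python
import numpy as np

# A "parallel" recurrent threshold network and a strictly sequential
# emulation of it produce identical state trajectories.

W = np.array([[0.0, 1.0, -1.0],
              [1.0, 0.0, 1.0],
              [-1.0, 1.0, 0.0]])

def parallel_step(state):
    """All units update at once, as on neuromorphic hardware."""
    return (W @ state > 0).astype(float)

def sequential_step(state):
    """A von Neumann machine computes the same update one unit at a
    time, reading only the old state so the result matches exactly."""
    new_state = np.empty_like(state)
    for i in range(len(state)):          # strictly sequential loop
        new_state[i] = float(W[i] @ state > 0)
    return new_state

state = np.array([1.0, 0.0, 1.0])
for _ in range(5):
    assert np.array_equal(parallel_step(state), sequential_step(state))
    state = parallel_step(state)
print("parallel and sequential trajectories are identical")
```

Under IIT, the first system could carry high Phi while the second, computing exactly the same function, carries almost none; that divergence is the crux of Bach’s objection.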

Body Sense and Information Processing

Antonio Damasio and Anil Seth explore the idea that the bootstrap for consciousness in animals might not be purely information processing, but rather a bodily sense of self, or interoception, originating deep in the brain stem [00:45:50]. This suggests that even animals with less developed higher brains might have a sense of being [00:46:10].

Bach argues that even knowing one has a body or a brainstem relies on electrochemical impulses that encode and represent information, which makes it a form of information processing [00:46:21]. Unlike current large language models, humans are coupled to the environment through loops of intentions, actions, observations, and feedback (including interoception), which in turn give rise to new intentions [00:46:39]. It is within this loop that the body is discovered: not as a given, but through the interaction of intentions, actions, and the world itself [00:47:01]. This loop forms a model of one’s own agency [00:47:14].
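A minimal sketch of such a loop, under loose assumptions, might look like the following. The `World` and `Agent` classes and the thermostat-style task are hypothetical stand-ins; the point is only the closed cycle in which intentions drive actions, observations and interoceptive feedback follow, and a record of the agent’s own agency accumulates.

```python
# Hypothetical sketch of an intention-action-observation-feedback loop.
# Every name here is a stand-in invented for illustration.

class World:
    def __init__(self):
        self.temperature = 30.0

    def apply(self, action):
        self.temperature += {-1: -2.0, 0: 0.0, +1: 2.0}[action]
        return self.temperature              # exteroceptive observation

class Agent:
    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint
        self.model_of_agency = []            # (intention, action, outcome)

    def interoception(self, observation):
        """Internal body signal: discomfort grows away from the setpoint."""
        return abs(observation - self.setpoint)

    def step(self, world, observation):
        discomfort = self.interoception(observation)  # interoceptive feedback
        if discomfort < 1.0:
            intention, action = "rest", 0
        else:
            intention = "cool" if observation > self.setpoint else "warm"
            action = -1 if intention == "cool" else +1
        outcome = world.apply(action)
        # The loop closes: outcomes of the agent's own actions are what
        # reveal the body and build up a model of agency.
        self.model_of_agency.append((intention, action, outcome))
        return outcome

world, agent = World(), Agent()
obs = world.temperature
for _ in range(6):
    obs = agent.step(world, obs)
print(agent.model_of_agency)
```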

Neural Darwinism and Mind Growth

Organisms and social systems exhibit a “second-order design” or “inside-out design,” rather than the “outside-in” engineering approach of human-built robots [00:48:01]. Nature starts with a seed that grows by colonizing its environment, turning chaos into controllable complexity through feedback loops [00:48:36]. Bach suspects the mind is also implemented this way, starting with a “seed for a mind” that grows, rather than a detailed blueprint in the genome [00:49:01].
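As a loose illustration of inside-out growth, the sketch below places a seed on a noisy grid and lets it expand by recruiting neighboring cells it can stabilize. The grid, the threshold, and the recruitment rule are all invented for this sketch; it shows only the shape of the idea, a structure growing outward from a seed rather than being assembled from a blueprint.

```python
import random

# A seed "colonizes" a noisy grid: growth proceeds inside-out through
# local feedback, not from an outside-in blueprint. Rules are invented.

random.seed(0)
SIZE = 15
noise = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
controlled = {(SIZE // 2, SIZE // 2)}        # the seed

for _ in range(40):
    frontier = set()
    for r, c in controlled:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            # Recruit a neighbor only when local "chaos" is low enough
            # for the growing structure to stabilize it.
            if 0 <= nr < SIZE and 0 <= nc < SIZE and noise[nr][nc] < 0.8:
                frontier.add((nr, nc))
    controlled |= frontier

print(f"cells under control: {len(controlled)} / {SIZE * SIZE}")
```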

This concept aligns with Gerald Edelman’s “Neural Darwinism,” in which an evolution among different approaches within the same mind coalesces into an ordered, resilient structure [00:49:21]. This adaptability allows humans to function adequately even with grave deviations in brain formation [00:49:32]. Bach sees no reason why this kind of behavior could not be built into artificial systems [00:50:12]. Current deep learning systems such as GPT-3 are not directly constructed as knowledge bases; they find regularities by training on vast amounts of data, performing computations that are in many ways “superhuman” compared to human brains [00:50:21].
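A toy selectionist loop in that spirit might maintain a population of competing “approaches” inside a single system, retain the ones that prove useful, and vary them, as in the sketch below. The curve-fitting task (recovering y = 3x + 1) and all parameters are arbitrary choices made for illustration.

```python
import random

# Neural-Darwinism-flavored toy: variation plus selection among many
# candidate "approaches" coalesces into a structure that fits the task.

data = [(x, 3 * x + 1) for x in range(-5, 6)]   # arbitrary target: y = 3x + 1

def error(params):
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in data)

population = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]

for _ in range(200):
    population.sort(key=error)
    survivors = population[:5]                  # selection: keep what works
    population = survivors + [
        (a + random.gauss(0, 0.3), b + random.gauss(0, 0.3))  # variation
        for a, b in survivors
        for _ in range(3)
    ]

best = min(population, key=error)
print(f"best surviving variant: a={best[0]:.2f}, b={best[1]:.2f}")
```

Because no single variant is essential, the system tolerates losing any one of them, a small analogue of the resilience described above.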

Neurons as “Little Animals”

Bach suggests that neuroscience may not fully understand how the brain works [01:15:07]: even emulating a simple organism like C. elegans in a computer does not work [01:15:13]. Neuroscientists often view neurons as complex switches, or as storing memory in synapses, but this might be only a small part of the story [01:15:18].

Instead, Bach views a neuron as a “little animal”: a single-celled organism with many degrees of freedom in its behavior [01:15:27]. It learns to behave in a particular way based on its environment, actively selecting signals, branching out stochastically, and retaining links according to their usefulness [01:15:36]. A neuron must make itself useful, much like other cells in the body, and these constraints are emergent, regulated by neighboring cells the way people in a company or society regulate each other on the basis of a shared purpose [01:15:59].
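One way to render this picture in code, purely as a thought experiment, is a unit that stochastically grows candidate links and keeps only those that turn out to be useful. The `NeuronAgent` class and its “usefulness” scores are invented for the sketch and stand in for whatever local signals real neurons might exploit.

```python
import random

# Thought-experiment sketch: a neuron as a "little animal" that branches
# out stochastically and retains links in proportion to their usefulness.

class NeuronAgent:
    def __init__(self, n_sources):
        self.n_sources = n_sources
        self.links = set()

    def grow(self):
        """Stochastically branch toward a new signal source."""
        self.links.add(random.randrange(self.n_sources))

    def prune(self, usefulness):
        """Keep only the links that have proven useful to the neuron."""
        self.links = {s for s in self.links if usefulness[s] > 0.5}

# Sources 0-4 carry useful signal; 5-9 are noise (made-up scores).
usefulness = {s: 0.9 if s < 5 else 0.1 for s in range(10)}

neuron = NeuronAgent(n_sources=10)
for _ in range(50):
    neuron.grow()
    neuron.prune(usefulness)

print(f"retained links: {sorted(neuron.links)}")   # settles on useful sources
```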