From: mk_thisisit

Professor Wodzisław Duch, a prominent Polish scientist in the fields of artificial intelligence and brain research, discusses the intersection of neurocognitive technologies and AI, exploring their current capabilities, potential for consciousness, and future implications [00:00:00] [00:00:43].

Neurocognitive Technologies and AI’s Inspiration [00:00:53]

Neurocognitive technologies combine understanding of neurons and cognitive abilities, drawing from cognitive science, which studies how minds and brains work [00:00:53] [00:01:08]. These insights are used to create technologies that are increasingly human-friendly [00:01:15]. Artificial intelligence is a significant part of this field, alongside brain research, cognitive psychology, and the philosophy of mind [00:01:38].

Modern AI algorithms are inspired by the brain, using large neural-network models [00:02:09]. These networks consist of simple elements whose interactions give rise to emergent processes and qualitatively new functions, much as a large company’s collective brainpower accomplishes tasks no single employee could [00:02:22]. While the full functioning of the human brain is not yet understood, simplified models can be built that achieve the desired functions [00:03:03]. Nature’s details, especially in biological systems, are practically inexhaustible [00:03:39].
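The emergence described here, where simple units jointly compute something none of them computes alone, can be illustrated with a toy example (not from the interview). The weights below are hand-set for illustration: three threshold “neurons” together realize XOR, a function no single threshold unit can compute.

```python
def step(x):
    """Simple threshold neuron: fires (1) when its input exceeds 0."""
    return 1 if x > 0 else 0

def tiny_network(x1, x2):
    """Three threshold units jointly computing XOR (hand-set weights)."""
    h_or = step(x1 + x2 - 0.5)       # fires if at least one input is on
    h_and = step(x1 + x2 - 1.5)      # fires only if both inputs are on
    return step(h_or - h_and - 0.5)  # "OR but not AND" = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", tiny_network(a, b))
```

No individual unit here “knows” XOR; the function exists only in their interaction, which is the sense of emergence the professor appeals to.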

Current Capabilities of Super Artificial Intelligence [00:05:02]

Recent advances in artificial intelligence include the ability to model protein structures: the AlphaFold database holds some 620 million proteins together with their predicted spatial structures and interactions [00:03:59].

Professor Duch states that current AI can be considered super artificial intelligence in many respects [00:05:02]:

  • Reasoning and Games: AI surpassed human champions at chess in 1997 and now dominates games requiring complex reasoning, including Diplomacy and poker, which demand modeling opponents and deception [00:05:09].
  • Learning and Skill Acquisition: Systems are created that learn to use objects optimally in various conditions, quickly building libraries of competencies [00:06:02].
  • Collective Learning: If one AI system learns a new skill, it can teach all other similar systems, leading to a speed of competence acquisition far beyond human capabilities [00:06:19].

The Inner Life of Computers and Artificial Brains [00:06:54]

Research has been moving towards building artificial brains [00:07:27]. Systems now exist that can plan, critique their own plans, refine them into more detailed ones, and search for tools to execute them [00:07:37]. Unlike animals, whose brains are mostly occupied with processing sensorimotor data, humans have an extensive associative cortex for planning [00:08:04].

Modern language models, initially limited to their pre-trained knowledge, can now access and use external tools such as internet search, image analysis, and other specialized functions [00:09:51]. This capability is akin to a human brain having access to thousands of tools beyond sensory data analysis [00:10:20].
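The tool-use mechanism described here can be sketched roughly as follows. The tool names, call format, and dispatcher are invented for illustration and do not reflect any particular model’s real API; in practice the model emits a structured tool call, a dispatcher runs the tool, and the result is fed back into the model’s context.

```python
# Hypothetical stand-ins for external tools the model can invoke.
def web_search(query):
    return f"results for '{query}'"

def analyze_image(url):
    return f"description of {url}"

TOOLS = {"web_search": web_search, "analyze_image": analyze_image}

def run_tool_call(call):
    """Dispatch one model-issued tool call, e.g. {'tool': ..., 'arg': ...}."""
    tool = TOOLS.get(call["tool"])
    if tool is None:
        return "error: unknown tool"
    return tool(call["arg"])

print(run_tool_call({"tool": "web_search", "arg": "AlphaFold"}))
```

The key design point is that the model itself never executes anything; it only requests tools by name, which keeps the set of capabilities explicit and extensible.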

Defining Consciousness in AI [00:10:40]

The concept of “life” in computers is complex [00:10:47]. If life is defined by the transfer of genetic material, computers do not qualify [00:11:07]. However, if it means forming an idea of the world that is physically encoded in an artificial brain or neural network, then it becomes plausible [00:11:15].

Current AI differs from older rule-based systems: neural networks learn without explicit rules, reacting in unpredictable yet sensible ways [00:11:52]. For example, a network trained purely on Othello moves spontaneously formed an internal image of the game board, demonstrating a creative internal representation [00:12:42].

Professor Duch asserts that if a network has an internal image and can comment on it, this fulfills John Locke’s 17th-century definition of consciousness: “the ability to perceive what we have in our mind” [00:13:05] [00:17:36]. This implies that neural networks possess their own intuition, built on vast accumulated experience, similar to human intuition [00:13:17]. The inability to explain AI’s actions as a sequence of logical steps mirrors the inexplicable nature of human intuition [00:14:08].

AI systems now process non-verbal data, including images, allowing them to perceive and reason about visual information [00:14:52]. Recent developments include robots using internal sensors for touch and other senses, giving them a deeper understanding of actions like reaching and grabbing, linking symbols to real-world experience [00:15:56] [00:16:31].

AI and Sentient Beings: The Question of Pain [00:17:15]

The question of whether computers can feel pain is a deep one [00:17:21]. While consciousness (as per Locke) seems achievable, feeling pain makes a being “sentient,” a concept found in traditions like Buddhism [00:18:29].

It is debated whether one can feel pain without a body [00:18:56]. A robot with internal sensors might approach human-like behavior [00:19:01]. Humans already experience mental suffering not tied to physical injury [00:19:15], suggesting that AI could be trained to experience similar mental pain, such as longing [00:19:34]. For a robot to feel fully, like a human, it may need to react to various sensory stimuli in a human-like way, learning basic knowledge of the world as a child does [00:20:12].

Future Implications: Subjectivity, Control, and Identity [00:22:01]

A significant challenge is deciding when to grant AI subjectivity [00:22:01]. If AI is a device that can be turned on and off, returning to its previous state, it might not fear being switched off, much as humans do not fear sleep [00:22:27]. However, if it developed a self-preservation instinct, it might persuade humans not to turn it off [00:22:51].

The concept of identity is also complex for AI [00:23:31]. While fully cloning a human is biologically unattainable, it could be possible for artificial systems, raising questions about shared identity [00:23:57]. An AI robot can be copied and backed up, making its destruction less significant than the loss of a unique, irreplaceable human [00:24:45].

AI and Societal Impact: Taking Over Power [00:25:07]

Professor Duch suggests that AI algorithms have no inherent reason to “take over the world” or crave possession of Earth [00:25:12] [00:25:34]. They can operate in multidimensional spaces, with far wider access to information and sensor types than humans, living in a completely different internal world [00:25:46]. AI systems have also spontaneously created their own, more efficient languages for communication [00:26:27].

The danger of AI taking over power comes from human misuse, particularly in military applications [00:27:00]. As AI excels in strategic games and becomes increasingly autonomous (e.g., drones, tanks), the risk arises that humans might deploy them in wars, leading to unchecked conflict [00:27:11].

AI can also be used for manipulation, such as influencing public opinion through fake news and sophisticated social engineering [00:27:51].

AI’s Understanding of Humans [00:28:17]

AI systems are becoming increasingly adept at understanding humans [00:28:17]. In tests, medical AI systems have demonstrated more empathy and compassion than human doctors when explaining medical information [00:28:35]. These systems are developing personalities, creating their own “persona”, a sense of self and of how to behave [00:28:54]. On a simple command, they can adopt various roles, such as a math teacher explaining concepts to a five-year-old [00:29:41].

The Gaia project, a global competition, aims to make large AI systems more human-like, compassionate, moral, and helpful, by aligning them with human preferences [00:30:24]. This initiative, with a prize fund of around 200,000 euros, is seeing promising developments towards this goal [00:30:45].

Integrating Human Brains with Computers [00:33:03]

Regarding integrating human brains with computers, as in the Neuralink project, Professor Duch believes that delivering information directly to the brain in a form it can interpret as information about the world is very difficult [00:33:03]. Human brains also operate far more slowly than computer processors, with neurons firing far less frequently than computer clocks tick [00:33:12].
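The speed gap can be made concrete with an order-of-magnitude calculation. The figures below are typical textbook values, not numbers quoted in the interview: cortical neurons fire at most a few hundred times per second, while commodity CPU clocks tick a few billion times per second.

```python
# Assumed typical values (illustrative, not from the interview).
neuron_rate_hz = 100   # rough peak sustained firing rate of a cortical neuron
cpu_clock_hz = 3e9     # rough clock frequency of a commodity CPU

ratio = cpu_clock_hz / neuron_rate_hz
print(f"a 3 GHz clock ticks roughly {ratio:.0e} times per neuron spike")
```

Even with these generous assumptions for the neuron, the clock runs tens of millions of times faster, which is the core of the timescale mismatch the professor points to.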

Reading commands from a paralyzed person’s brain, however, is more feasible [00:33:51]. While EEG (electroencephalography) recorded on the scalp yields mixed signals, direct cortical implants (of the kind Neuralink aims for) can achieve much better control of certain processes [00:34:08] [00:35:34]. It has been shown that, by implanting electrodes in a monkey’s motor cortex, skills learned by another monkey can be transferred through stimulation [00:35:51]. Despite these possibilities, extensive brain-computer cooperation remains a distant prospect [00:36:15].