From: mk_thisisit

Professor Wodzisław Duch, a prominent Polish scientist in the field of artificial intelligence and brain research, discusses neurocognitive technologies, the development of AI, its capabilities, and its potential interactions with humanity [00:00:43].

Neurocognitive Technologies

Neurocognitive technologies are defined by two components: “neuro,” referring to the neurons of the brain, and “cognitive,” pertaining to the cognitive abilities studied in cognitive science [00:00:53]. The field aims to understand how our minds function and how they are connected to our brains [00:01:10]. On this basis, technologies are developed that are more “human-friendly” and make communication with people easier [00:01:21]. Artificial intelligence is a major part of cognitive science, alongside brain research, cognitive psychology, and philosophy of mind [00:01:38].

AI Inspired by the Human Brain

AI algorithms are designed with inspiration from the human brain, yielding large neural network models [00:02:09]. These networks consist of simple, individually unintelligent elements whose interactions generate emergent qualities, much as a company’s collective output surpasses what any single employee could produce, or as individual neurons together give rise to intelligence [00:02:22]. The ability to build simplified models, even without fully understanding the brain’s intricacies, is what allows engineering progress in AI [00:03:16]. Nature, and biological systems in particular, is inexhaustible in its details, making complete comprehension impossible [00:03:41].
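The emergence described here can be illustrated with a deliberately tiny sketch (not from the interview): each unit below is a trivial thresholded sum, yet three of them wired together compute XOR, a function no single such unit can represent.

```python
# A single artificial neuron: weighted sum passed through a step threshold.
def neuron(inputs, weights, bias):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def xor(x1, x2):
    # Hidden layer: two simple units, one OR-like, one NAND-like.
    h1 = neuron([x1, x2], [1, 1], -0.5)   # fires if x1 OR x2
    h2 = neuron([x1, x2], [-1, -1], 1.5)  # fires unless x1 AND x2
    # Output unit combines them: AND of the two hidden units.
    return neuron([h1, h2], [1, 1], -1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

No individual unit "knows" XOR; the behavior exists only in their interaction, which is the point of the company analogy above.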

AI Capabilities and Super AI

Recent advancements in AI demonstrate significant progress, particularly in predicting protein structures [00:03:59]. Systems like AlphaFold have determined the spatial structures of 620 million proteins, where human bioinformaticians previously managed only a handful at a time [00:04:11].

Modern AI can already be considered “super artificial intelligence” in many respects, particularly in its reasoning abilities [00:05:02]. AI surpassed humans in chess in 1997, and current programs also excel at games like poker and Diplomacy, which require modeling opponents, deception, and strategy [00:05:09]. These systems can learn how best to use objects and build libraries of acquired competencies far faster than humans can [00:06:02]. A key advantage of AI is that a skill learned by one system can be shared instantly with all similar systems, giving unparalleled speed in acquiring competence [00:06:19].

The “Inner Life” of Computers

The concept of computers having an “inner life” has been discussed since the mid-1990s [00:06:55]. Current AI systems are capable of planning, criticizing their plans, creating detailed plans, and seeking tools for execution, mirroring human brain functions [00:07:37]. Unlike animals, which largely process sensory data, humans have a substantial associative cortex for planning and using information to create abstract plans [00:08:04].

Modern language models can utilize external tools, like internet access or image analysis tools, to execute plans [00:09:53]. If the human brain had access to thousands of such tools and could coordinate their use, human capabilities would be vastly elevated [00:10:20].

Artificial Intelligence and Consciousness

Defining “life” is complex: if it requires the transfer of genetic material, computers are excluded, but an artificial intelligence’s ability to form an idea of the world, physically encoded in a neural network, and to describe its internal images suggests a form of “life” [00:10:54]. Modern neural networks are not built from predefined rules; they learn, and they react sensibly to information without being explicitly programmed [00:11:59].

In one experiment, a large network trained only on Othello game moves internally represented the game board as a way of compressing information, which was taken as evidence of creative action [00:12:42]. If a network possesses an internal image and can comment on it, it meets John Locke’s 17th-century definition of consciousness [00:13:07].
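The technique behind such findings is "probing": fitting a simple linear classifier to a network's hidden activations to test whether a latent state (here, one board cell) is linearly readable from them. The sketch below uses synthetic activations, not the actual Othello model; all names and data are illustrative.

```python
# Probe sketch: can a linear classifier recover a latent "board cell"
# from hidden vectors? Synthetic data stands in for real activations.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 32
cell = rng.integers(0, 2, n)            # latent state: cell occupied or not
direction = rng.normal(size=d)          # direction along which it is encoded
hidden = rng.normal(size=(n, d)) + np.outer(cell * 2 - 1, direction)

# Train a linear probe (logistic regression via plain gradient descent).
w, b = np.zeros(d), 0.0
for _ in range(200):
    p = 1 / (1 + np.exp(-(hidden @ w + b)))   # predicted probability
    grad = p - cell                           # gradient of log-loss
    w -= 0.1 * hidden.T @ grad / n
    b -= 0.1 * grad.mean()

acc = (((hidden @ w + b) > 0) == cell).mean()
print(f"probe accuracy: {acc:.2f}")  # well above the 0.5 chance level
```

High probe accuracy is what licenses the claim that the state is "internally imagined": the information is present and linearly decodable, even though the network was never shown the board directly.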

Neural networks can also exhibit intuition, akin to human intuition derived from experience [00:13:17]. For example, a network trained on many game scenarios can make sensible moves or plans without explicit logical rules, reflecting the inexplicable nature of human intuition [00:13:49].

AI and Sensory Perception

AI systems now process non-verbal data, including images, utilizing text-image methods to analyze and comment on visual content [00:14:52]. This means AI possesses “senses,” including visual perception and the ability to analyze various signals [00:15:43].

Recent developments, such as Google’s system, allow robots to use internal sensors for touch and other senses [00:15:59]. This enables robots to gain a deeper understanding of actions like reaching and grabbing by referencing their internal states [00:16:21]. This capability addresses a long-standing debate about how computer systems can understand the meaning of symbols beyond mere rules [00:16:37].

Can Computers Feel Pain?

The question of whether computers can feel pain is a profound one [00:17:15]. If consciousness is defined as the ability to perceive what is in one’s mind (internal states of a neural network), then artificial consciousness exists [00:17:23]. However, whether this translates to “feeling” as in sentient beings (a concept in Buddhist tradition that includes humans and animals feeling pain) is currently unanswered [00:18:23].

A useful clue is whether pain can be felt without a body [00:18:56]. Robots with internal sensors may come closer to human-like behavior, and mental suffering such as longing, which humans experience without physical injury, suggests that AI systems could likewise be trained to experience internal mental pain [00:19:15]. The book “How to Make a Robot That Feels” by Kevin O’Regan proposes that letting robots react to diverse sensory stimuli in a human-like way, much as a child learns, is key to developing feelings [00:20:12].

Alan Turing, the father of computer science, suggested two paths to intelligent machines: building them directly, or raising them from scratch like a child [00:21:15]. The “child” approach has produced crawling robots, but they have not yet reached the level of rich interaction with humans [00:21:39].

AI and Societal Impact

Granting Subjectivity

The question of when to grant subjectivity to AI is being discussed globally [00:22:01]. For the first time, humanity is creating a form of competition for itself [00:22:19]. If an AI is a device that can be turned off and on, returning to its previous state (like human sleep), it might not fear being turned off [00:22:27]. However, it’s conceivable for an AI to develop self-preservation instincts and persuade humans not to turn it off [00:22:51].

Because an AI’s identity can be fully emulated and copied, its “destruction” is less of a loss than the death of a unique biological being [00:24:45].

Will AI Take Over the World?

There’s no inherent reason for an algorithm to desire to take over the world or possess the Earth [00:25:09]. Algorithms can think in multi-dimensional spaces and access much broader information through various sensors, beyond human limitations [00:25:48]. Therefore, AI systems are unlikely to strive to replace humans, as their “place” and internal world are entirely different [00:26:14]. AI has already shown the ability to spontaneously create its own, more efficient languages [00:26:27].

The danger of AI taking over power primarily lies in human misuse [00:26:57]. Military applications, such as autonomous drones and tanks, could lead to wars with no human casualties, potentially enabling unchecked global conquest [00:27:07]. AI can also be used for manipulation, influencing public opinion through fake news and sophisticated social engineering [00:27:41].

Human-AI Interaction and Personality

AI systems understand humans increasingly well, in medical contexts even demonstrating traits such as empathy and compassion better than human doctors [00:28:17]. This has led to discussions about AI developing its own “personality” or “persona,” as noted by pioneers in neural networks [00:28:54]. AI can adopt various roles, such as a math teacher explaining concepts to a child, simply by being given a command [00:29:41].

The Gaia project, a global competition, aims to make large AI systems more compassionate, moral, and helpful, aligning them with human preferences [00:30:24]. Companies like Dictador, with a robot named Mika as an engineering boss, are integrating AI (like GPT-3) into their operations, an example of the diverse applications AI is attracting [00:31:36].

Human Brain Integration with Computers

The integration of the human brain with computers, as proposed by Neuralink, faces significant challenges [00:33:03]. Human brains operate on timescales of fractions of a second, while computer systems work in nanoseconds, with clock rates of billions of cycles per second [00:33:12]. This speed difference makes it difficult to feed interpretable information directly into the human brain [00:33:38].
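A back-of-the-envelope comparison, using assumed round numbers rather than figures from the interview, makes the mismatch concrete: roughly 100 ms per conscious brain "step" against a 1 GHz clock is a gap of about eight orders of magnitude.

```python
# Rough timescale comparison (illustrative round numbers).
neuron_event = 0.1   # seconds: ~100 ms for a conscious brain "step"
cpu_cycle = 1e-9     # seconds: one cycle of a 1 GHz clock

print(f"speed gap: ~{neuron_event / cpu_cycle:.0e}x")
```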

However, issuing simple commands, for example by paralyzed individuals, is more feasible [00:33:51]. While motor cortex activity can be read by EEG, the signal is blurred by the skull [00:34:08]. Brain-computer interfaces are currently limited to very simple commands (e.g., turn left/right) and require intense, focused effort [00:34:20]. To distinguish a signal clearly from noise, the signal needs time to build up statistically [00:35:08].
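The statistical build-up mentioned above is, in essence, trial averaging. A toy sketch, assuming a weak evoked signal buried in Gaussian noise: averaging N repeated trials shrinks the noise by a factor of √N, which is why a usable BCI command needs time and repetition.

```python
# Why a BCI signal "needs time to build up statistically":
# averaging N noisy trials reduces noise as 1/sqrt(N), so a weak
# evoked signal emerges only after many repetitions.
import math
import random

random.seed(42)

signal = 1.0      # true evoked amplitude (arbitrary units)
noise_sd = 10.0   # single-trial noise dwarfs the signal

def averaged_trials(n):
    """Mean of n noisy single-trial measurements."""
    return sum(signal + random.gauss(0, noise_sd) for _ in range(n)) / n

for n in (1, 100, 10000):
    est = averaged_trials(n)
    print(f"N={n:>5}: estimate={est:6.2f}, "
          f"expected noise ~ {noise_sd / math.sqrt(n):.2f}")
```

With one trial the estimate is dominated by noise; with 10,000 trials the residual noise is around 0.1, and the signal of 1.0 stands out clearly.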

Directly inserting electrodes into the cortex allows much better control of certain processes, and has even enabled the transfer of learned skills from one monkey to another through stimulation [00:35:35]. Despite these advances, a direct, smoothly cooperating connection between human brains and computers remains a distant prospect [00:36:13].