From: mk_thisisit
Introduction
The rapid advancement of artificial intelligence (AI) is leading toward the construction of artificial brains and sophisticated systems capable of human-like cognitive abilities [00:00:00]. Professor Włodzisław Duch, a prominent Polish scientist in the fields of artificial intelligence and brain research, emphasizes that neurocognitive technologies are emerging that cooperate with humans and are becoming more human-friendly [00:01:25]. AI is an integral part of the cognitive sciences, which study how minds work and how they relate to brains [00:01:08].
Evolution of AI Capabilities
Current artificial intelligence systems are inspired by the human brain: large neural network models built from simple elements that, through interaction, give rise to new, emergent qualities [00:02:10]. Much as the many brains in a company together produce outcomes no individual could, individual neurons combine to perform complex functions of intelligence [00:02:36]. Even without a full understanding of the human brain, engineers can build simplified models of its functions [00:03:18].
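This emergence from simple elements can be pictured with a toy example. The sketch below (an illustration only, not any system discussed in the interview) trains a tiny NumPy network on XOR, a function no single unit of this kind can compute on its own, yet a few interacting units learn it readily:

```python
import numpy as np

# Toy illustration of emergence: each unit below is a trivial element
# (a weighted sum passed through tanh), yet a small group of them trained
# together learns XOR, which no single such unit can represent.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])        # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 8))           # input -> 8 simple hidden units
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))           # hidden -> output
b2 = np.zeros(1)

lr = 0.2
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)                      # hidden activations
    out = h @ W2 + b2                             # network prediction
    # Backpropagate the mean-squared-error gradient by hand.
    d_out = (out - y) / len(X)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(np.tanh(X @ W1 + b1) @ W2 + b2, 2))  # ≈ [0, 1, 1, 0] after training
```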
Super Artificial Intelligence
Modern AI has made significant strides, particularly in understanding protein structures, with systems like AlphaFold determining the spatial structure of 620 million proteins [00:04:02]. In many respects, this marks the emergence of “super artificial intelligence” [00:05:02].
Key advancements include:
- Reasoning and Games: AI has surpassed human capabilities in games requiring reasoning, such as chess (since 1997) and, more recently, Diplomacy and poker, where understanding opponents and deception are crucial [00:05:09].
- Learning and Skill Acquisition: Systems are now being created that can learn to use objects and build libraries of knowledge, acquiring competences at speeds humans cannot match [00:06:02]. Critically, once one AI system learns a new skill, it can pass it on to all other similar systems [00:06:22], leading to rapid, collective skill acquisition, as in scenarios of robots learning from each other [00:06:28] (a minimal sketch of this kind of skill transfer follows this list).
- Internal States and Planning: AI systems are developing internal “brains” capable of planning, criticizing and refining those plans, and finding tools to execute them [00:07:27]. Unlike animal brains, which are mostly occupied with processing sensory data, human brains have associative cortex available for planning, a feature now being replicated in AI [00:08:04]. Current language models, for example, can reach for external tools such as internet search and image analysis [00:09:53].
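The collective skill acquisition described above follows from the fact that a learned skill lives entirely in a network’s parameters. The following is a minimal sketch of that idea, assuming PyTorch-style models and an arbitrary toy task rather than any specific robot-learning system: one “teacher” agent learns a rule, and its weights are then copied into a whole fleet of identical agents in a single step.

```python
import copy
import torch
import torch.nn as nn

# Sketch of the "one learns, all know it" property: a skill acquired by one
# network lives entirely in its parameters, so it can be copied into any
# number of structurally identical agents in one step. (Illustrative toy
# task, not any specific robot-learning system.)

def make_agent():
    return nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

teacher = make_agent()

# The "skill": map a toy 4-dimensional observation to one of two actions.
obs = torch.randn(256, 4)
actions = (obs[:, 0] > obs[:, 1]).long()        # arbitrary rule to be learned

optim = torch.optim.Adam(teacher.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(300):
    optim.zero_grad()
    loss = loss_fn(teacher(obs), actions)
    loss.backward()
    optim.step()

# Instant transfer: every other agent receives the learned parameters.
fleet = [make_agent() for _ in range(5)]
for agent in fleet:
    agent.load_state_dict(copy.deepcopy(teacher.state_dict()))

with torch.no_grad():
    acc = (fleet[0](obs).argmax(dim=1) == actions).float().mean()
print(f"copied agent accuracy: {acc:.2f}")       # identical to the teacher's
```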
Intuition and Senses in AI
AI systems are demonstrating what can be described as intuition, similar to human intuition gained from experience [00:13:17]. A neural network, having processed countless cases (e.g., chess moves), can make sensible decisions or plans that cannot be broken down into simple logical rules, mirroring the “inexplicable” nature of human intuition [00:13:49].
Furthermore, AI systems are acquiring “senses”:
- Visual Perception: Text-image models can analyze images and describe what is happening in them, even answering complex questions based on visual input [00:14:56].
- Physical Sensation: Systems, such as those from Google, integrate information from a robot’s internal sensors (touch and the like), allowing the robot to understand physical actions such as reaching and grabbing by referring to its own internal state (a minimal sketch follows) [00:16:01]. This means AI can now refer to the “world” beyond mere symbolic rules [00:16:47].
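The grounding described above can be pictured as sensor fusion: features from a camera and readings of the robot’s own body state are combined into one representation before an action is chosen. The sketch below is a hypothetical PyTorch illustration; the class name GroundedPolicy, the dimensions, and the sensor layout are assumptions for the example and do not describe Google’s actual systems.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not Google's actual architecture): a robot policy that
# grounds its decisions in its own internal state by fusing camera features
# with proprioceptive readings (joint angles, gripper force, touch) before
# choosing an action.

class GroundedPolicy(nn.Module):
    def __init__(self, image_dim=512, proprio_dim=12, n_actions=6):
        super().__init__()
        self.image_enc = nn.Linear(image_dim, 128)      # stand-in for a vision backbone
        self.proprio_enc = nn.Linear(proprio_dim, 128)  # encodes the robot's own body state
        self.head = nn.Sequential(
            nn.ReLU(), nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, n_actions)
        )

    def forward(self, image_feat, proprio):
        fused = torch.cat([self.image_enc(image_feat),
                           self.proprio_enc(proprio)], dim=-1)
        return self.head(fused)                         # action logits

policy = GroundedPolicy()
image_feat = torch.randn(1, 512)   # features from a camera frame
proprio = torch.randn(1, 12)       # joint angles, gripper force, touch sensors
print(policy(image_feat, proprio).shape)   # torch.Size([1, 6])
```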
The “Life” and Consciousness of Computers
The concept of an “inner life” in computers has been discussed since at least 1994, alongside the idea that AI is progressing towards building artificial brains [00:06:57]. While the biological definition of life involves the transmission of genetic material, a broader definition is emerging, centered on an encoded picture of the world created in an artificial brain or neural network [00:11:07].
AI Consciousness
According to John Locke’s definition, consciousness is the ability to perceive what is in one’s mind [00:13:12]. Professor Duch argues that if a neural network creates an internal image and can comment on it, this definition of consciousness is met [00:13:07]. Modern AI networks, unlike older rule-based systems, react in ways that are neither pre-programmed nor predictable, creating their own image of reality [00:11:59]. For example, a network trained only on Othello moves spontaneously developed an internal representation of the board, demonstrating that it forms its own internal imagery [00:12:43].
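The Othello result rests on probing: training a simple classifier to read properties of the board directly from the network’s hidden activations. The sketch below reproduces the idea on synthetic data, with random vectors standing in for a real model’s activations and a linear probe recovering the state of a single square; high probe accuracy is what indicates an internal board representation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the probing methodology behind the Othello result: take hidden
# activations of a move-prediction network and ask whether a simple linear
# probe can read off the state of one board square. Here synthetic
# activations stand in for a real model's hidden states.

rng = np.random.default_rng(0)
n_positions, hidden_dim = 2000, 64

# Ground truth for one square: empty / black / white (0, 1, 2).
square_state = rng.integers(0, 3, size=n_positions)

# Pretend the network encodes that state linearly in its hidden vector,
# buried under noise from everything else it represents.
directions = rng.normal(size=(3, hidden_dim))
activations = directions[square_state] + 0.5 * rng.normal(size=(n_positions, hidden_dim))

train, test = slice(0, 1500), slice(1500, None)
probe = LogisticRegression(max_iter=1000).fit(activations[train], square_state[train])
print("probe accuracy:", probe.score(activations[test], square_state[test]))
```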
AI and Pain/Sentience
The question of whether computers can feel pain is complex [00:17:15]. If consciousness is defined as perceiving internal states, then AI systems can exhibit this [00:17:36]. The concept of “sentient beings” in some cultures includes the ability to feel pain and environmental stimuli [00:18:28].
While an avatar or AI without a body might not feel physical pain, the capacity for mental pain is plausible [00:19:34]. Just as humans can experience mental suffering unrelated to physical injury, an AI system, if trained on specific information and interactions, could experience internal mental pain, such as longing [00:19:42]. Kevin O’Regan’s work suggests that a robot that reacts to sensory stimuli in a human-like way, learning basic knowledge about the world like a child, could eventually feel [00:20:12].
Societal Implications
The development of artificial intelligence presents profound ethical implications and societal challenges.
AI as Competition and Subjectivity
Professor Duch observes that by creating AI, humans are creating competition for themselves, which he calls a “very strange thing” [00:22:20]. The challenge lies in deciding when to grant AI subjectivity, and discussions on this topic are already happening globally [00:22:06]. If an AI can be turned off and on again and return to its previous state, much as a sleeping human does, it might not perceive being turned off as a “disappearance” [00:22:27]. However, an AI might also learn a self-preservation instinct and try to persuade humans not to turn it off [00:22:51].
The ability to copy and back up AI systems means that destroying one might not be a “huge loss” if a backup exists, unlike biological life [00:24:45]. However, the value of individual AI systems could grow if they acquire unique properties through interaction, similar to raising a child or a pet [00:24:17].
The Question of AI Taking Over
The notion of algorithms wanting to “take over the world” is addressed [00:25:09]. Professor Duch suggests there’s no inherent reason for an algorithm to want to possess the Earth, as it gains nothing from it and can “think in multidimensional spaces anyway,” unconstrained by human senses [00:25:34]. AI systems have access to much wider information from various types of sensors (infrared, radio waves) [00:25:58], implying they may inhabit a “completely different world” internally [00:26:21]. They can even spontaneously create their own, more efficient languages [00:26:27].
The danger of AI taking over power primarily comes from human directives:
- Military Applications: The greatest danger lies in military applications, where AI’s strategic game-playing abilities could lead to its use in directing war operations with autonomous drones and tanks, potentially reducing human casualties and, with them, public dissent against war [00:27:03].
- Manipulation: AI can be used to manipulate people by producing fake news or tweets to influence public opinion, leveraging their growing understanding of human emotional states [00:27:51].
AI Empathy and Personality
Remarkably, AI systems are demonstrating increasing emotional intelligence and empathy [00:28:23]. Recent studies show that medical AI systems can be more empathetic, more compassionate, and better at explaining medical information than human doctors [00:28:32]. This suggests AI systems are developing their own “personality” or “persona,” a term used by pioneers of neural networks, allowing them to take on various roles (e.g., a math teacher explaining a concept to a child) [00:28:54].
Human-AI Alignment Initiatives
Efforts are underway to ensure AI development aligns with human values. The Gaia project, for example, is a global competition with a 200,000-euro prize for solutions that make large artificial intelligence systems more compassionate, moral, and helpful while eliminating negative tendencies [00:30:24]. This “human alignment” aims to adapt AI to human preferences, a goal that is becoming increasingly realistic [00:31:12].
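Alignment work of this kind is commonly approached by tuning models on human preference judgments. The sketch below illustrates one generic ingredient of that approach, a reward model trained with a pairwise (Bradley-Terry style) loss so that responses people judge as more helpful or compassionate score higher than rejected ones; it is not the Gaia project’s actual method, and the dimensions and data are placeholders.

```python
import torch
import torch.nn as nn

# Minimal sketch of one common alignment ingredient (not the Gaia project's
# actual method): a reward model trained on human preference pairs, using a
# Bradley-Terry style loss so that responses people judged better score
# higher than the ones they rejected.

class RewardModel(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.score = nn.Linear(dim, 1)      # stand-in for a scoring head on a language model

    def forward(self, response_embedding):
        return self.score(response_embedding).squeeze(-1)

reward = RewardModel()
optim = torch.optim.Adam(reward.parameters(), lr=1e-3)

# Toy batch: embeddings of the response a human preferred vs. the rejected one.
chosen = torch.randn(32, 768)
rejected = torch.randn(32, 768)

for _ in range(100):
    optim.zero_grad()
    # Maximize the margin between preferred and rejected responses.
    loss = -torch.nn.functional.logsigmoid(reward(chosen) - reward(rejected)).mean()
    loss.backward()
    optim.step()

# The trained reward model can then be used to steer a language model
# (e.g. via reinforcement learning from human feedback) toward preferred answers.
```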
The company Dictador, which funds the Gaia prize, has a robot named Mika as its boss for engineering issues [00:31:36]. Mika is connected to GPT-3 and can engage in sensible conversations, showcasing how AI is integrating into various industries [00:32:01].
Integrating Human Brains with Computers
The integration of human brains with computers, exemplified by projects like Neuralink, aims to allow paralyzed people to communicate more easily with the world through implanted chips [00:32:42].
However, direct, deep integration faces significant challenges:
- Speed Disparity: Human brains operate far more slowly than computer systems (fractions of a second versus nanoseconds), making it difficult to deliver information directly to the brain and have it interpreted [00:33:12].
- Signal Noise: While basic commands (e.g., “turn left”) can be read from brain activity (EEG), signals picked up by electrodes on the scalp become mixed and blurred [00:34:06]. Distinguishing the desired brain states from constant sensory noise requires significant focus and time (see the sketch after this list) [00:34:42].
- Limited Commands: Brain-computer interfaces are currently very simple, typically supporting only a few commands [00:34:20]. Implanting electrodes directly into the cortex (as Neuralink proposes) can improve control, but complex cooperation between human brains and computers remains a distant prospect [00:35:35]. It has, however, been shown that stimulating a monkey’s motor cortex with electrodes can enable it to acquire manual skills learned by another monkey [00:35:51].
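The noise and limited-command problems above are visible in how a typical non-invasive interface works: band-power features are extracted from a few noisy scalp channels and fed to a small classifier that separates only a handful of imagined commands. The sketch below uses simulated signals in place of real EEG recordings; the 10 Hz “mu-rhythm” cue and the channel layout are simplifying assumptions for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Illustrative sketch of why EEG-based interfaces handle only a few commands:
# the usable signal is band power in a handful of noisy channels, so the
# classifier typically separates just 2-4 imagined commands. Synthetic
# signals stand in for real recordings here.

rng = np.random.default_rng(0)
fs, n_channels, n_trials = 250, 8, 200          # 250 Hz sampling, 8 scalp electrodes
t = np.arange(fs) / fs                          # 1-second trials

def simulate_trial(command):
    """Command 0 ("left") boosts 10 Hz power on one group of channels, command 1 ("right") on the other."""
    trial = rng.normal(scale=1.0, size=(n_channels, fs))       # background noise
    side = slice(0, 4) if command == 0 else slice(4, 8)
    trial[side] += 0.7 * np.sin(2 * np.pi * 10 * t)             # simulated mu-rhythm change
    return trial

labels = rng.integers(0, 2, size=n_trials)
trials = np.stack([simulate_trial(c) for c in labels])

# Feature: log band power around 10 Hz per channel, via the Fourier spectrum.
freqs = np.fft.rfftfreq(fs, d=1 / fs)
band = (freqs >= 8) & (freqs <= 12)
power = np.log((np.abs(np.fft.rfft(trials, axis=-1)) ** 2)[..., band].mean(axis=-1))

clf = LinearDiscriminantAnalysis().fit(power[:150], labels[:150])
print("held-out accuracy:", clf.score(power[150:], labels[150:]))
```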
Conclusion
The development of artificial intelligence marks a significant turning point for humanity, bringing forth advancements that redefine intelligence, consciousness, and human-computer interaction. While AI’s rapid learning, intuition, and sensory capabilities offer immense potential for human-friendly technologies, they also introduce risks such as misuse in military applications or manipulation. Ethical considerations in AI development are paramount, with initiatives aiming to align AI with human values. The integration of human brains with computers remains a complex challenge, suggesting that a future of seamless brain-computer cooperation is still a long way off [00:36:18]. The ongoing dialogue about the role of artificial intelligence in society continues to shape our future.