From: mk_thisisit
Professor Krzysztof Geras, an outstanding computer scientist and theoretician who develops algorithms and models of artificial intelligence in medicine, discusses recent developments in artificial intelligence, its current capabilities, potential dangers, and future implications, including the need for regulation [00:33:43].
Recent Developments in AI
Over the last year, public perception of artificial intelligence’s development has outpaced actual scientific progress [01:29:00]. While this is not a revolution in AI as a field of science, the creation of widely available and useful models has sparked a “revolution of artificial intelligence for the average person” [01:50:00]: public awareness of AI’s capabilities has increased significantly, and practical models such as AI-powered chatbots are now accessible to everyone [02:11:00]. These advancements are the culmination of practical engineering efforts over the past few years and of scientific progress made over the last decade [02:48:00].
Understanding AI Capabilities
Human Brain Replication
It is not currently possible to fully replicate the human brain’s intelligence or structure in artificial intelligence [03:33:00]. The primary barrier is not computer technology, but a lack of complete understanding of how the human brain operates and performs calculations [03:41:00]. If 100% knowledge of brain function were available, programming it using elements found in artificial neural networks would likely be feasible [03:48:00]. Therefore, the limitation lies in neuroscience, as the human brain’s workings cannot yet be fully expressed as a mathematical formula or algorithm [04:09:00].
Does AI Have Intuition?
Defining intuition for AI is difficult, but in some sense AI can exhibit elements of it [04:38:00]. AI models can synthesize knowledge about the world and form hypotheses even without complete certainty on a specific topic [04:47:00]. Human intuition often stems from non-verbal, non-digitized data [05:41:00], whereas current advanced AI algorithms learn primarily from text [06:05:00]. While text can carry elements of human imagination, fears, and dreams, it is hard to characterize how “intuition” is captured in it [06:15:00]. When trained on massive datasets, these models can implicitly absorb elements that are never explicitly named in the text [06:33:00].
AI Consciousness
Professor Geras believes that artificial intelligence currently does not possess consciousness [07:07:00]. While interacting with AI programs might give the impression of consciousness due to their human-like responses, this is largely due to our tendency to anthropomorphize [07:13:00]. AI programs do not “know that they exist” or have their own agenda; they simply execute assigned commands [07:33:00].
Threats and Dangers of AI
The most significant threat to artificial intelligence development comes from humans themselves [09:34:00]. If people use AI unwisely to cause harm, it will inevitably lead to dramatic regulation of AI, which would slow its development [09:39:00]. The current concern is not the pace of scientific development but the practical applications of AI [10:07:00].
Loss of Control
The most interesting and dangerous moment for AI will be when it gains the ability to improve itself [00:06:00] [17:34:00]. This could lead to a complete loss of human control over artificial intelligence [00:16:00] [17:41:00]. If AI could reason about its own existence and act in the physical world without direct human oversight, and especially if it could autonomously set and pursue its own goals, it would become very dangerous [08:09:00].
Proliferation Risk
The proliferation of artificial intelligence poses a risk similar to the proliferation of weapons [11:40:00]. Just as nuclear weapons are dangerous in the hands of great powers but become extremely dangerous if every small country possesses them, the same would be true of AI if everyone could create their own mini-AI and control it arbitrarily to perform destructive acts [11:50:00]. Currently, AI is not yet strong enough to pose such a threat, and models remain confined inside their “box” [13:52:00]. However, the trend is disturbing: widely accessible, powerful AI (e.g., a “GPT 10 model” trainable on a home computer) could be dangerous if used for harm [13:17:00]. The risk would escalate dramatically if AI models could escape their digital “box” onto the internet, replicate themselves, or control physical devices [14:03:00].
“Strong” or “Super” AI
The terms “strong artificial intelligence” and “super artificial intelligence” are often used colloquially to mean AI that is intelligent in every respect and surpasses human beings in all facets [14:36:00]. Fortunately, humanity is still quite far from achieving such a state [15:05:00]. While current AI, as a computer program, has no inherently destructive component, future, more complicated models will be increasingly difficult to control, especially if trained by organizations for purposes inconsistent with societal well-being [10:27:00].
Path to Strong AI and Its Implications
The philosophical question facing humanity is determining when to start being concerned or to “say stop” regarding AI development [15:34:00]. Although that moment is still distant, planning for it should begin now [15:52:00]. Professor Geras does not expect AI to have the technical capability to take over the world within the next 10 years [16:17:00]. However, if such strong artificial intelligence were hypothetically created, he would support granting it a separate personality and recognizing it as a distinct entity, potentially viewing it as a “new species” [16:31:00] [17:11:00]. This would shift humanity’s role from “Creators” to “co-creators” [17:21:00].
OpenAI’s Achievement
OpenAI’s significant achievement lies in leveraging scientific foundations for language models, which, while not new (a pioneering article dates to around 2000), were applied on an unprecedented scale [18:10:00]. A long-standing debate in computational linguistics questioned whether language alone could provide enough information to create strong artificial intelligence [19:02:00]. OpenAI demonstrated that, indeed, based solely on vast amounts of language data, it is possible to create something incredibly powerful [19:32:00]. The success is rooted in training language models on enormous datasets, from single books to millions of books and eventually the entire internet, which allows for the extraction of knowledge not possible with smaller datasets [20:06:00]. This successful demonstration was a massive achievement that many scientists had anticipated [21:18:00].
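To make the training objective behind this scale argument concrete, the sketch below is an illustrative assumption rather than OpenAI’s actual code: it shows next-token prediction, the signal on which such language models are trained, using a toy corpus and a tiny PyTorch recurrent model. Real systems use transformer architectures and vastly larger datasets, but the objective is the same: predict each token from the tokens that precede it.

```python
# Minimal sketch (hypothetical, not OpenAI's implementation): next-token prediction
# on a toy corpus. Model size, corpus, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

corpus = "language models learn by predicting the next token in a sequence of text"
vocab = sorted(set(corpus.split()))
stoi = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([stoi[w] for w in corpus.split()])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)   # stand-in for a transformer
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)          # logits for the next token at each position

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Inputs are tokens 0..n-2, targets are tokens 1..n-1 (shifted by one position).
x, y = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)
for step in range(200):
    logits = model(x)
    loss = nn.functional.cross_entropy(logits.view(-1, len(vocab)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The same objective, applied to millions of books and web-scale text with far larger models, is what allows the extraction of knowledge described above; nothing in the loss function itself changes with scale.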
Polish Contribution to AI and Technology
Wojciech Zaręba, one of OpenAI’s co-founders, is Polish, suggesting that the success of OpenAI is, to some extent, a Polish success, although science is international [21:40:00]. Poland boasts a very high quality of programming and engineering education, providing graduates with deep technical foundations, particularly in mathematics and computer science [22:56:00]. However, this often comes at the expense of creating domestic IT companies, as much of this talent is “exported” abroad [23:30:00]. While Poles who study abroad can bring back knowledge and establish companies, the large-scale export of highly qualified programmers is viewed negatively [24:30:00]. Ideally, Poles would stay in Poland and build the domestic economy, rather than the export of talent being the primary way Poland promotes itself [24:47:00].