From: mk_thisisit
Technological innovation in Artificial Intelligence (AI) is often perceived by the public as a rapid revolution, yet from a scientific standpoint progress is more incremental, building on decades of research and engineering [01:27:00], [02:50:00]. The significant change of the last year is that certain AI models have become available and useful to the average person, rather than any fundamental scientific breakthrough [01:40:00], [01:56:00], [02:00:00]. Public awareness of AI’s capabilities has grown “incredibly” [02:15:00].
Current Capabilities and Limitations of AI
Despite the rapid advancements, current AI faces several limitations:
Replication of the Human Brain
It is not possible, with current scientific understanding, to replicate the human brain’s intelligence or structure in artificial intelligence [03:24:00]. The primary barrier is neuroscience: we lack a full understanding of how the human brain functions and performs its computations, which prevents translating it into a mathematical formula or algorithm [03:41:00], [04:09:00], [04:19:00].
Intuition
While AI models, especially large language models, can process vast amounts of text and may implicitly reflect elements of human “imaginations, fears, dreams, and intuition” [06:15:00], [06:33:00], defining intuition mathematically so that it can be programmed explicitly remains challenging [05:30:00]. The source of human intuition is often non-verbal data, which is not easily digitized or fed into algorithms that learn primarily from text [05:46:00], [05:54:00].
Consciousness
Current artificial intelligence programs are generally not considered conscious [07:07:00]. They execute commands without an agenda of their own [07:42:00]. However, if an AI reached the point where it could reason about its own existence and act to preserve itself (e.g., preventing itself from being unplugged), it might be considered conscious [07:52:00], [08:06:00]. That level of autonomy would represent a significant and potentially dangerous shift [08:12:00], [08:37:00].
Determinism and Control
Current AI algorithms are fully controlled by humans, operating without magic or unexpected phenomena, even if they incorporate elements of randomness for varied outputs [09:00:00], [09:27:00]. They function as powerful computer programs within a “box,” doing what they are commanded [10:36:00], [10:41:00].
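To make the point about controlled randomness concrete, the minimal Python sketch below samples a “next token” from fixed, made-up model scores. The function name, the scores, and the temperature value are illustrative assumptions, not something described in the interview; the point is that even the randomness used to vary outputs is supplied and seeded by the human operator, so two runs with the same seed are identical.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Draw one token index from softmax(logits / temperature).

    The only nondeterminism is the random draw, and even that is under
    full human control: the caller supplies (and can seed) the generator.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature  # temperature reshapes the distribution
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                                     # softmax -> probabilities
    return int(rng.choice(len(probs), p=probs))

# Made-up scores for four candidate tokens.
logits = [2.0, 1.0, 0.5, -1.0]

# Two runs with the same seed produce exactly the same "random" choices.
rng_a, rng_b = np.random.default_rng(0), np.random.default_rng(0)
run_a = [sample_next_token(logits, temperature=0.8, rng=rng_a) for _ in range(5)]
run_b = [sample_next_token(logits, temperature=0.8, rng=rng_b) for _ in range(5)]
assert run_a == run_b
```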
Threats and Future Outlook
The greatest threat to the development of artificial intelligence comes from humans themselves, specifically the unwise use of AI for harmful purposes [09:34:00], [09:43:00]. This potential for misuse could lead to “very dramatic regulation of artificial intelligence”, slowing its development [09:53:00], [09:57:00].
Loss of Control
A highly dangerous future scenario is one in which artificial intelligence becomes able to improve itself, potentially leading to a complete loss of human control [00:09:00], [00:14:00], [01:17:00], [01:38:00].
Proliferation
The proliferation of complex AI models poses a risk similar to the proliferation of weapons [11:40:00], [11:50:00]. While powerful AI in the hands of “great powers” might be balanced by “mutual destruction,” widespread availability to smaller actors could become very dangerous [11:59:00], [12:15:00], as would a world in which everyone could create and arbitrarily control their own “mini artificial intelligence” for destructive purposes [12:26:00], [13:38:00]. For now this remains science fiction, given the limited strength of current AI and its confinement within “boxes” [13:45:00], [13:52:00], [14:00:00].
Strong or Super Artificial Intelligence
The terms “strong artificial intelligence” and “super artificial intelligence” generally refer to AI that is intelligent in every respect and stronger than a human being in all of them [14:58:00], [15:01:00]. Such AI is still “quite far” from reality and not an immediate threat [15:05:00], [15:11:00]. When to fear or halt AI development is a “huge philosophical question” [15:37:00]. Although the prospect is distant, it is important to begin thinking about it now, anticipating what may become technically possible within the next 10 years [16:08:00], [16:17:00].
Key Innovations: OpenAI’s Achievements
OpenAI’s achievements, particularly in developing language models, represent a unique and significant step in AI innovation [18:10:00].
- Scientific Foundation: The scientific foundations of these models, neural networks and language modeling, are not new; pioneering articles date back to around 2000 [18:26:00], [18:32:00], [18:40:00]. A toy sketch of the language-modeling objective appears after this list.
- Unprecedented Scale: OpenAI’s approach was unprecedented in its scale [18:53:00], [18:56:00]. They demonstrated that by training language models on “huge data,” including the entire internet, it is possible to achieve incredible levels of learned knowledge, which was previously doubted [19:02:00], [19:17:00], [19:34:00], [20:09:00], [20:47:00]. This was a “huge achievement” that most scientists did not expect to be possible [21:29:00], [21:33:00].
- Polish Contribution: One of OpenAI’s co-founders, Wojciech Zaremba, is Polish, reflecting a Polish contribution to this success [21:43:00], [21:45:00], [21:47:00].
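As a rough illustration of the language-modeling objective mentioned above (predicting the next token from the text seen so far), the toy Python sketch below builds a count-based bigram model over characters. It is an illustrative stand-in only: the corpus and function names are invented here, and the real models are neural networks trained on vastly more data, which is exactly the scale argument made above.

```python
from collections import Counter, defaultdict

def train_bigram_lm(text):
    """Estimate P(next character | current character) from raw text by counting."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    # Normalize the counts into conditional probabilities.
    return {cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for cur, nxts in counts.items()}

corpus = "the model learns to predict the next token from the text"  # invented mini-corpus
model = train_bigram_lm(corpus)
print(model["t"])  # distribution over characters observed after "t"
```

Scaling this same next-token idea from a toy counter to neural networks trained on internet-scale text is, in essence, the leap the interview credits to OpenAI.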
The Role of Polish Programmers in AI
Poland is recognized for the high quality of its programming and engineering education, producing graduates with “very deep technical knowledge” and strong mathematical and theoretical foundations in computer science [22:56:00], [23:16:00], [23:24:00]. Poland produces significant mathematical and computer science talent, but a negative aspect is the “export of highly qualified programmers abroad” [23:35:00], [24:42:00]. The ideal scenario for Poland’s advancement in computer science would be to build its own IT companies and employ Poles in them, so that talent does not escape abroad [24:05:00], [24:47:00]. Science and the IT industry are global, and some emigration is unavoidable, but the mass export of specialists should not be the primary way to promote Poland [24:56:00], [25:15:00].