From: mk_thisisit

The field of artificial intelligence (AI) is rapidly progressing, but its current state and future prospects present significant challenges and ethical considerations. Experts, such as Professor Marta Kwiatkowska from the University of Oxford, emphasize the need to return to the foundations of AI’s creation and develop new approaches to teaching AI systems [00:00:21].

Defining Artificial Intelligence

Historically, artificial intelligence was introduced over 50 years ago with the idea that it should be a copy of human intelligence [00:02:04]. Current AI systems are certainly artificial, but whether they are genuinely intelligent is debatable [00:02:27]. The term “intelligence” as applied to AI in the media is often misunderstood, since computers currently cannot achieve human intelligence [00:03:02].

Strong vs. Shallow Intelligence

Strong intelligence, as envisioned by some, would closely resemble human reasoning and argumentation [00:04:09]. Shallow intelligence, by contrast, covers decisions that are easy for AI systems to make [00:04:34].

Current State and Limitations of Artificial Intelligence

Despite significant progress, current artificial intelligence systems exhibit notable limitations:

Lack of Human-like Intelligence

Computers are not yet capable of human-level intelligence, and achieving it will take much longer than 5 to 10 years [00:00:27]. This is a very difficult problem, as humanity still does not fully understand how human understanding and learning work [00:03:27], [00:13:33]. While AI can analyze data, it does not inherently possess intuition or physical thinking [00:13:41], [00:21:34].

Autonomous Systems Issues

Autonomous technologies, such as self-driving taxis, are already in use, but they face significant issues. In San Francisco, autonomous taxis have been reported to stop when they do not know what to do, sometimes causing dangerous situations by blocking ambulances [00:07:50]. In one extreme incident, an autonomous car drove into the middle of Chinese New Year celebrations, something a human driver would never do [00:08:46]. This highlights the need for AI to behave like a human driver, understanding context and making nuanced decisions [00:09:17].

Consciousness and Statistics

The idea that consciousness is arising in large language models is doubted by experts, who emphasize that these models primarily operate on statistics [00:10:12]. While Transformers can understand context and recognize word meanings, their outputs are still computed from word frequencies and learned features [00:11:09].
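The “statistics, not understanding” point can be illustrated with a toy bigram model (this minimal example is mine, not from the interview): the next word is chosen purely from observed frequencies, with no comprehension involved.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM learns far richer features, but the principle of
# frequency-driven prediction is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    # Return the statistically most frequent successor of `word`.
    return counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # → cat ("cat" follows "the" twice, others once)
```

The model produces plausible continuations without any notion of what a cat is, which is the distinction the interview draws between statistical calculation and understanding.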

Hallucinations in Large Language Models (LLMs)

The release of systems like ChatGPT in November 2022 surprised many, but subsequent reports on their use showed that they are not yet fully developed, particularly because of “hallucinations” [00:11:48]. Hallucinations occur because LLMs search for answers in an artificial, infinite world based on interpolation of finite data [00:26:10]. One proposed solution is a two-level system in which the LLM handles communication while a backend system provides accurate information [00:27:05].
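The two-level design described above can be sketched as follows. All names here (`backend_lookup`, `llm_phrase`, the fact store) are hypothetical stand-ins for illustration, not an actual implementation from the interview.

```python
# Level 2: an authoritative fact store that refuses rather than guessing.
FACTS = {"capital_of_poland": "Warsaw"}  # stands in for a verified database

def backend_lookup(key):
    if key not in FACTS:
        raise KeyError(f"no verified answer for {key!r}")
    return FACTS[key]

def llm_phrase(fact):
    # Level 1: the language model's only job is fluent wording, not recall.
    return f"According to the knowledge base, the answer is {fact}."

def answer(key):
    try:
        return llm_phrase(backend_lookup(key))
    except KeyError:
        return "I don't know."  # declining beats hallucinating

print(answer("capital_of_poland"))
print(answer("capital_of_atlantis"))  # → I don't know.
```

The key design choice is that the component prone to interpolation never invents facts: when the backend has no verified answer, the system declines instead of hallucinating.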

Data and Sensory Gaps

AI is mainly fed with image and text data, lacking the spatial understanding and sensory input (such as smell or gestures) that humans use to interpret the world [00:20:09]. To create true copies of human intelligence, all artificial senses would need to be introduced [00:19:31]. The challenge is not just collecting more data, but using it intelligently and incorporating physical models of the world [00:29:50].

The Future of Artificial Intelligence

In the 5-10 year perspective, existing artificial intelligence technologies, such as those used in automatic taxis and language models, have the potential to enter software products [00:05:18]. There is an expectation for greater autonomy in systems, leading to better and safer decisions [00:07:03].

However, achieving AI that thinks and reasons like a human is expected to take much longer than 5-10 years and will require more complex communication signals [00:09:45]. The goal is not necessarily to build an exact copy of human intelligence, but an intelligence that corresponds to specific applications [00:14:27].

Research and Development

Professor Kwiatkowska’s team at Oxford is working on multi-agent AI systems, where agents can perceive and communicate to make predictions, similar to how human drivers anticipate pedestrian behavior [00:34:31]. The team also developed a tool called PRISM for modeling stochastic systems, i.e. systems whose behavior involves probabilistic actions [00:38:21].
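A minimal example of the kind of stochastic model such tools analyze is a discrete-time Markov chain. (PRISM itself uses its own modelling language and exact probabilistic model checking; this Python sketch, with made-up states and probabilities, is only for intuition.)

```python
# Transition probabilities between two illustrative states.
P = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step_dist(dist):
    # Push a probability distribution over states through one transition.
    out = {s: 0.0 for s in P}
    for s, p in dist.items():
        for t, q in P[s].items():
            out[t] += p * q
    return out

dist = {"sunny": 1.0, "rainy": 0.0}
for _ in range(100):
    dist = step_dist(dist)

# The distribution converges to the stationary one: sunny 2/3, rainy 1/3.
print(round(dist["rainy"], 4))  # → 0.3333
```

Model checkers answer questions like “with what probability does the system eventually reach a bad state?” exactly, rather than by simulation as above.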

Challenges and Ethical Considerations in AI Development

The rapid advancements in AI have outpaced society’s readiness and regulatory frameworks [00:17:05].

Regulatory Gaps

A significant challenge is the lack of appropriate global and local regulation, particularly concerning copyright for works used in training AI systems [00:32:21]. The EU AI Act, while strong, is still in the process of implementation across member states [00:33:14].

Miscommunication Across Disciplines

There is a fundamental misunderstanding in the public debate about AI, partly because different disciplines (mathematicians, physicists, sociologists, psychologists, programmers) use varying nomenclature and understand terms like “learning” differently [00:22:24]. Machine learning, for instance, is primarily learning from data through advanced statistical interpolation, which differs significantly from how humans and children learn [00:23:25].
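The “learning as interpolation” point can be made concrete with a deliberately simple example of my own (not from the interview): a model that interpolates between samples of sin(x) does well inside the range of its data but has no answer at all outside it, which is not how a child generalizes.

```python
import math

# Training data: samples of sin(x) on [0, 3.5].
xs = [i * 0.5 for i in range(8)]
ys = [math.sin(x) for x in xs]

def interpolate(x):
    # Linear interpolation between the two neighbouring samples.
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("outside the training data: the model has no answer")

print(round(interpolate(0.25), 3))  # close to sin(0.25) ≈ 0.247
```

In-sample queries are answered well; a query at x = 10 simply fails, illustrating why a model built on statistical interpolation of finite data behaves so differently from human learning.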

Data Collection and Synthetic Data

Collecting enough realistic data to reproduce the complexity of the world is a challenge, as data collection is expensive and subject to privacy restrictions [00:28:24]. While synthetic data can help, its quality and realism are critical for effective training [00:28:42].
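One naive way to produce synthetic data, sketched here with made-up numbers of my own, is to sample from a distribution fitted to a small real dataset; the realism of the result then depends entirely on how well that fitted distribution matches the world, which is exactly the quality concern raised above.

```python
import random
import statistics

# A small "real" dataset (illustrative values, e.g. heights in cm).
real = [172, 168, 181, 175, 169, 178]
mu = statistics.mean(real)
sigma = statistics.stdev(real)

# Generate synthetic samples from the fitted normal distribution.
rng = random.Random(42)  # fixed seed for reproducibility
synthetic = [rng.gauss(mu, sigma) for _ in range(1000)]

# The synthetic data reproduces the summary statistics of the real data,
# but any structure the normal model misses is lost.
print(round(statistics.mean(synthetic), 1))
```

A privacy benefit is that no real record appears in the synthetic set; the cost is that the data is only as realistic as the assumed model.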

Societal Impact and Preparedness

Humanity was not ready for the current pace of artificial intelligence development and its societal impact [00:17:05]. This unpreparedness is not unique to AI; historically, new technologies like aviation also went through initial periods of unregulated development and accidents before safety systems were introduced [00:17:19]. The integration of autonomous taxis, for example, needs to be understood in the context of the entire social system, in which humans and AI operate on the same roads [00:18:05].

Commercialization of Research

Universities like Oxford encourage scientists to consider applications beyond “blue-sky research” [00:41:34]. Researchers can choose to publish tools as open source or develop closed-source commercial software. The university provides systems and regulations to facilitate commercialization, though it requires scientists to balance scientific and business work [00:42:19].