From: mk_thisisit
Defining Artificial Intelligence
Historically, artificial intelligence (AI) was conceived over 50 years ago with the idea that it should be a copy of human intelligence. This is why the term “artificial intelligence” was used [00:02:04]. However, what is being developed today, while artificial, does not necessarily possess intelligence in the human sense [00:02:23]. The term “intelligence” as used in media often refers to human intelligence, which computers currently cannot achieve [00:03:13]. This is a very difficult problem that is not yet fully understood [00:03:27].
Some define “strong intelligence” in AI as reasoning that closely resembles the arguments a person produces and can derive conclusions, in contrast to “shallow intelligence,” which handles only simple decisions [00:04:04].
AI and Consciousness
Newspaper headlines and even some scientific texts discuss conscious artificial intelligence, or “strong artificial intelligence” [00:03:39]. The head of OpenAI reportedly strives for strong, conscious artificial intelligence [00:03:45]. However, Professor Marta Kwiatkowska doubts that consciousness is beginning to arise in large language models; she considers it an illusion stemming from how these models are trained [00:10:05]. Current models still operate on statistics [00:10:32].
Limitations of AI Compared to Humans
Current computers are not yet capable of human intelligence, and achieving it will take much longer than 5 to 10 years [00:00:27]. A different approach to teaching artificial intelligence systems is needed [00:00:35].
Understanding and Learning
One significant difference lies in “understanding.” While AI models can recognize the meaning of a word from its context, they still work statistically, computing decisions from word frequencies and features [00:11:09]. Their answers are based on pattern matching, not true understanding [00:12:59]. Understanding is considered an element of human intelligence [00:13:17], and how human understanding and learning work is not yet fully known [00:13:31]. Children, for example, learn completely differently than language models do [00:13:52]. Machine learning, specifically, is learning from data, which is essentially interpolation: a very advanced statistical process, but not true understanding [00:23:25]. Humans learn through repetition and build mental models of everything, which language models currently lack [00:23:54].
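To make the statistical picture concrete, here is a minimal, illustrative sketch (a toy example, not any particular model's implementation): a bigram model that “answers” purely by replaying the most frequent continuation seen in its training data, with no representation of meaning.

```python
from collections import Counter, defaultdict

# Toy illustration of the "statistics, not understanding" point above:
# the model picks the next word purely by observed frequency.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the data."""
    counts = following.get(word)
    if not counts:
        return "<unknown>"  # nothing to interpolate from outside the data
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- frequency, not comprehension
print(predict_next("dog"))  # '<unknown>' -- no data, no answer
```

Everything the model “knows” is a count over its finite corpus, which is the interpolation point made above.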
Sensory Input and Intuition
Humans communicate not only with words but also through sight, gestures, and even smell [00:19:02]. An automatic taxi, for instance, does not operate with all of these senses, and complex problems such as recognizing smell are very difficult to solve in AI [00:19:15]. To create a true copy of human intelligence, all of these artificial senses would need to be introduced, allowing machines to communicate with us the way humans interact with each other [00:19:28].
Humans perceive the world spatially, understanding length, depth, and width, and grasping time intuitively [00:19:52]. AI, fed mainly image and text data, may not have enough data to build a spatial image of the world with all the laws of physics, chemistry, and mathematics [00:20:09]. While combining 2D images into a spatial view is an “easy technological problem,” the harder question is the use of additional senses and sensors [00:20:33]. Data alone is not enough; for example, a car needs to check whether a recognized speed sign makes sense in light of additional information, mimicking a human reflex [00:21:07].
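As an illustration of that reflex, the following hypothetical sketch cross-checks a recognized speed limit against independent context; the road types, ranges, and function names are assumptions made for the example, not a real driving stack.

```python
# Plausible speed-limit ranges per road type (km/h); values are
# illustrative assumptions, not real traffic rules.
PLAUSIBLE_LIMITS = {
    "motorway": range(60, 131),
    "urban": range(20, 61),
    "residential": range(10, 41),
}

def accept_speed_sign(recognized_kmh: int, road_type: str,
                      map_limit_kmh: int | None = None) -> int:
    """Cross-check a vision result against context before acting on it."""
    if recognized_kmh in PLAUSIBLE_LIMITS.get(road_type, []):
        return recognized_kmh  # the reading is consistent with context
    # Implausible reading: fall back to map data, like a human reflex.
    return map_limit_kmh if map_limit_kmh is not None else recognized_kmh

# A "130" sign recognized on a residential street is rejected:
print(accept_speed_sign(130, "residential", map_limit_kmh=30))  # -> 30
```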
Intuition, the product of analyzing data acquired through all our senses, is another aspect AI struggles with. Psychologists and neuroscientists are exploring this, and collaboration with computer scientists is needed to teach AI to solve the kinds of problems children can solve but AI cannot yet [00:21:34].
Hallucinations in LLMs
Large Language Models (LLMs) can “hallucinate” because they look for answers in an artificial, infinite world computed through interpolation of finite data [00:26:06]. Effectively fighting these hallucinations while preserving creativity is a current challenge [00:26:33]. One proposed solution is a two-level system in which the LLM handles communication with the user while a backend handles the core logic [00:27:05].
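A minimal sketch of that two-level idea, assuming a stubbed-out LLM call (`call_llm` is a placeholder, not a real library API): the backend owns the facts and can fail explicitly, while the language model is used only to phrase the answer.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for any LLM API, used purely for phrasing."""
    return f"(fluent reply based on: {prompt})"

# Backend: authoritative, finite data -- no interpolated "facts" here.
ACCOUNT_BALANCES = {"alice": 120.50}

def backend_lookup(user: str) -> str:
    if user not in ACCOUNT_BALANCES:
        return "no such account"  # explicit failure, not a guess
    return f"balance is {ACCOUNT_BALANCES[user]:.2f}"

def answer(user: str, question: str) -> str:
    facts = backend_lookup(user)               # level 1: core logic decides
    return call_llm(f"{question} -> {facts}")  # level 2: LLM phrases it

print(answer("alice", "How much money do I have?"))
```

The design point is that the LLM never invents the content of the answer; it can only rephrase what the deterministic backend returns.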
Current State and Future of AI
The current state of AI technology, seen in automatic taxis and language models, has the potential to enter software products [00:05:18]. However, many business models still rely on advertising rather than software products [00:05:39]. Even “automatic cars” such as Teslas are not truly autonomous, as the driver remains responsible [00:06:00]. Despite this, the technology already assists significantly with parking, emergency stopping, and sign recognition [00:06:16]. This kind of progress is expected to slowly integrate into software and systems that connect the necessary functions intelligently [00:06:36].
We can expect greater autonomy from future systems, leading to better and safer decisions [00:07:03]. However, the autonomous taxis currently operating in San Francisco demonstrate the limitations: when the car does not know what to do, it simply stops, which can be dangerous, blocking emergency services or provoking protests [00:07:50]. An extreme example involved an autonomous car driving into Chinese New Year celebrations, something a human driver would never do [00:08:41]. An automatic car should ideally behave like a human driver [00:09:15].
The development of artificial intelligence will take much longer than 5-10 years, requiring more communication signals [00:09:45]. Professor Kwiatkowska is optimistic that these problems can be solved [00:10:00].
Progress in artificial intelligence is not about simply having more and more data [00:25:05]. Instead, it is about using existing data and energy intelligently and integrating physical models of the world [00:25:23]. The goal is to develop a different approach to teaching artificial intelligence systems so that they can learn like children and reason with a physical model of the world [00:30:09]. The aim is artificial intelligence that thinks and reasons like a human [00:31:05].
Societal Readiness
The world was not ready for the rapid development of artificial intelligence [00:01:15]. This is common with new technologies; for example, early airplane flight in the 1920s was unregulated until accidents led to safety systems and regulations [00:17:19]. Similarly, regulations for autonomous taxis are still lacking [00:18:00]. Understanding how to integrate the use of automatic taxis into the existing social system, especially with human drivers on the same roads, is crucial [00:18:09].
Areas for Further Research
Computer scientists need to collaborate more with psychologists and neuroscientists to understand how to teach artificial intelligence [00:21:52]. A fundamental problem in the public debate on AI is that different fields (e.g., mathematics, physics, sociology, psychology, programming) bring different perspectives and nomenclature [00:22:21]. For instance, a psychologist’s understanding of “learning” differs significantly from a programmer’s concept of “machine learning” [00:22:44].
Creating synthetic data is an important area, especially for training generative networks [00:28:00]. While synthetic data can be easily produced and helps overcome the expense and privacy restrictions of collecting real data, its quality and realism are critical [00:28:24].
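A toy sketch of the idea, under the simplifying assumption that the real data can be summarized by basic statistics: synthetic records are resampled from those statistics, and their realism is checked against the original sample. Real generative networks learn far richer distributions; the sketch only shows why fidelity to the real distribution matters.

```python
import random

# A small "real" sample that is expensive or privacy-restricted to collect.
real_heights_cm = [162.0, 171.5, 168.2, 180.1, 175.4, 158.9]

mean = sum(real_heights_cm) / len(real_heights_cm)
var = sum((x - mean) ** 2 for x in real_heights_cm) / len(real_heights_cm)

def synthetic_sample(n: int) -> list[float]:
    """Cheap, privacy-preserving stand-ins resampled from fitted statistics."""
    return [random.gauss(mean, var ** 0.5) for _ in range(n)]

fake = synthetic_sample(1000)
# Realism check: synthetic statistics should closely match the real ones.
print(round(sum(fake) / len(fake), 1), "vs real mean", round(mean, 1))
```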
Reproducing the world for AI also poses challenges because humans themselves do not fully understand many laws of physics [00:29:14]. The problem is not about having an infinite number of images but about using data more intelligently to train systems [00:29:42].
Ethical and Regulatory Considerations
While some suggest areas where artificial intelligence development should be stopped, similar to bans on human cloning or nuclear tests in space, such decisions are currently local, varying by country [00:31:12]. For example, Germany has a different approach to privacy [00:32:09].
The biggest current problem is the lack of appropriate global and local regulation on copyright and the use of authors’ works for training, developing, testing, and verifying artificial intelligence systems [00:32:18]. The EU AI Act is a strong regional law, but it still needs to pass through parliaments in all European countries [00:33:09]. Some existing laws, such as in the UK, require companies to collect data on AI use, including energy consumption, as a starting point for safety analysis [00:33:39].
Research at Oxford
Professor Marta Kwiatkowska’s team at Oxford is working on multi-agent artificial intelligence systems [00:34:31]. This involves developing agents that use perception mechanisms (e.g., automatic cars seeing via neural networks) and communicate with other agents [00:35:05]. For example, an automatic car could use such a system to predict in real time whether a pedestrian at the edge of the road intends to cross, something human drivers do routinely [00:35:47]. The goal is to program automatic taxis to understand such situations and act accordingly, for example by slowing down [00:36:19].
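A hypothetical sketch of that prediction step, with made-up features and weights standing in for the group's actual perception and intent models:

```python
from dataclasses import dataclass

@dataclass
class Pedestrian:
    distance_to_kerb_m: float  # how close to the road edge
    facing_road: bool          # body/head orientation from perception
    speed_mps: float           # walking speed toward the kerb

def crossing_probability(p: Pedestrian) -> float:
    """Crude heuristic score: closer, faster, road-facing scores higher."""
    score = 0.4 if p.distance_to_kerb_m < 1.0 else 0.0
    score += 0.3 if p.facing_road else 0.0
    score += min(p.speed_mps / 2.0, 1.0) * 0.3
    return score

def plan_speed(current_kmh: float, p: Pedestrian) -> float:
    # Act like a cautious human driver: slow down when intent looks likely.
    return current_kmh * 0.5 if crossing_probability(p) > 0.5 else current_kmh

ped = Pedestrian(distance_to_kerb_m=0.5, facing_road=True, speed_mps=1.2)
print(plan_speed(50.0, ped))  # -> 25.0, the taxi slows down
```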
The professor’s group developed PRISM, a tool for modeling stochastic systems, particularly those with probabilistic behavior, which they have worked on for over 20 years [00:37:40]. The tool received a Test of Time Tool Award [00:38:47].
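For a flavor of what probabilistic model checking computes, here is a minimal Python sketch of one core question such a tool answers: the probability of reaching a target state in a Markov chain. PRISM has its own modeling language and does this symbolically and at far larger scale; the hand-coded chain below is purely illustrative.

```python
# A tiny discrete-time Markov chain.
# States: 0 = start, 1 = retry, 2 = success (target), 3 = failure (absorbing)
P = {
    0: {1: 0.5, 2: 0.4, 3: 0.1},
    1: {0: 0.7, 3: 0.3},
    2: {2: 1.0},
    3: {3: 1.0},
}
TARGET = 2

# Value iteration for reachability: prob[s] converges to
# the probability of eventually reaching TARGET from state s.
prob = {s: (1.0 if s == TARGET else 0.0) for s in P}
for _ in range(1000):
    prob = {s: (1.0 if s == TARGET else
                sum(p * prob[t] for t, p in P[s].items()))
            for s in P}

print(round(prob[0], 4))  # ~0.6154: chance of eventual success from start
```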