From: mk_thisisit
Professor Marta Kwiatkowska, the first woman in the history of the University of Oxford to hold the title of professor of computer science and the first woman in Great Britain to receive the Royal Society Milner Award, discusses the complexities and societal impact of artificial intelligence, particularly in the context of autonomous vehicles [00:00:00]. She emphasizes the need to return to the foundations of AI’s creation and to develop a different approach to teaching artificial intelligence systems [00:21:00], [00:35:00].
Defining Artificial Intelligence
Historically, artificial intelligence was conceived over 50 years ago with the idea that it should be a copy of human intelligence [02:04:00]. However, Professor Kwiatkowska notes that while what we observe today is artificial intelligence, the “intelligence” part of the term is not fully understood [02:23:00]. The term “intelligence,” as used in the media, often in reference to human intelligence, is misleading [03:02:00]. Computers cannot yet achieve human intelligence; this is a very difficult problem that is not yet fully understood [03:24:00].
Headlines discussing “conscious artificial intelligence” or “strong artificial intelligence” are prevalent [03:37:00]. Strong artificial intelligence is understood as intelligence that produces arguments and derivations very similar to a human’s [04:04:00]. This is distinct from “shallow intelligence,” which covers only narrow, relatively easy decisions [04:31:00].
Current State and Challenges of Autonomous Vehicles
While AI technology in automated taxis and language models has the potential to integrate into software products, current deployments remain limited [05:18:00]. For instance, cars like Tesla are not truly autonomous, because the human driver remains responsible for incidents [06:00:00]. Nevertheless, the technology significantly aids in tasks like parking, stopping assistance, and traffic-sign recognition [06:16:00].
Professor Kwiatkowska highlights the current limitations with autonomous taxis:
- Uncertainty and Stopping: Autonomous taxis in San Francisco stop if the car “doesn’t know what to do,” leading to protests [07:55:00]. This “safe stopping” can be dangerous, blocking emergency vehicles like ambulances [08:07:00].
- Lack of Contextual Understanding: An autonomous car was burned in San Francisco’s Chinatown district after it drove into Chinese New Year celebrations, a situation a human driver would typically avoid [08:41:00]. This demonstrates the current lack of nuanced understanding that human drivers possess [09:03:00].
- Sensory and Intuitive Limitations: Humans communicate and understand context not only through words and sight but also gestures and intuition [18:59:00]. Many human senses, such as smell, are very difficult for AI to replicate [19:22:00]. For AI to truly copy human intelligence, all artificial senses would need to be introduced [19:28:00]. AI is primarily fed with image and text data, which currently limits its ability to create a spatial image of the world with all laws of physics and chemistry [20:09:00]. Additional sensors, like distance sensors, are needed for autonomous vehicles [20:51:00].
Predictions and Misconceptions about AI Development
Professor Kwiatkowska is optimistic that problems with AI can be solved [10:00:00], but achieving human-like intelligence will take much longer than 5-10 years [00:31:00], [09:45:00]. She doubts that consciousness is arising in large language models [10:05:00], emphasizing that these models still rely on statistics [10:32:00], [11:24:00].
The public success of OpenAI’s release in November 2022 was surprising, but the system’s limitations, such as “hallucinations,” were not [11:48:00]. These models are trained on vast databases, including the entire internet, but their answers are based on pattern matching, not true understanding [12:45:00]. Human understanding and learning processes are still largely unknown [13:31:00].
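The “pattern matching” point can be made concrete with a toy sketch. The corpus and bigram model below are illustrative assumptions, not how a real large language model is built: a model that predicts the next word purely from observed frequencies can produce plausible continuations without any grasp of what the words mean.

```python
# Toy bigram "language model": it predicts the next word purely from
# frequency statistics over a training corpus -- pattern matching with
# no notion of meaning. The corpus is an invented example.
from collections import Counter, defaultdict

corpus = ("the car stops . the car stops . the car turns . "
          "the taxi stops .").split()

# Count, for each word, which words were observed to follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the statistically most frequent successor of `word`."""
    return follows[word].most_common(1)[0][0]

print(predict("car"))  # chosen purely by observed frequency
```

The model answers fluently for words it has seen, yet it has no representation of cars, taxis, or stopping, which is the distinction the professor draws between statistics and understanding.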
The goal doesn’t necessarily have to be an exact copy of human intelligence; an intelligence that corresponds to the specific application is sufficient [14:24:00]. The greatest challenge isn’t about collecting more and more data, but rather about using data intelligently and incorporating physical models of the world [25:05:00], [29:48:00].
Societal Readiness and Regulation
Humanity was not ready for the rapid development of artificial intelligence [01:15:00], [17:05:00]. This unpreparedness is not surprising, as every new technology faces similar issues [17:19:00]. Just as regulations for flying airplanes were introduced after accidents in the 1920s, current AI technology, particularly in autonomous taxis, lacks appropriate regulation [17:29:00]. A key challenge is integrating autonomous taxi use with the existing social system, especially with human drivers on the same roads [18:05:00].
A fundamental problem in the public debate on artificial intelligence is the different perspectives and terminologies used by various fields (mathematicians, physicists, sociologists, psychologists, programmers) [22:21:00]. For example, “learning” for a psychologist differs significantly from “machine learning” for a programmer, which is essentially learning from data via advanced statistical interpolation [22:44:00]. Human learning involves mental models and repetition, which is not fully replicated by current language models [23:47:00].
Ethical and Regulatory Considerations
Professor Kwiatkowska suggests that current AI development should not be collectively halted in specific areas globally, as regulations vary by country [31:52:00]. However, a major global problem is the lack of appropriate copyright and regulation for training, developing, testing, and verifying artificial intelligence systems [32:21:00].
The EU’s AI Act, while strong, is binding only within the region and has no universal counterpart [32:58:00]. Japan has a law that requires companies to collect data on AI usage, including energy consumption, which serves as a starting point for analyses to determine safety [33:39:00].
Future Research in AI and Autonomous Systems
Professor Kwiatkowska’s research at Oxford focuses on models of artificial intelligence systems, particularly multi-agent systems [34:29:00]. This involves developing agents that use perception mechanisms, often neural networks, and communicate with other agents [35:05:00]. An example is programming an automatic taxi to make predictions about pedestrian behavior, such as whether a pedestrian will cross the road, similar to how human drivers anticipate situations [35:47:00].
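As a hedged illustration of what such a pedestrian prediction might look like, the sketch below maps simple observations to a crossing probability. The features, weights, and logistic form are invented for this example and are not Professor Kwiatkowska’s model.

```python
# Hypothetical sketch: estimate the probability that a pedestrian will
# cross the road from simple observed features. Feature names and
# weights are illustrative assumptions, not a real perception model.
import math

def crossing_probability(distance_to_curb_m: float,
                         facing_road: bool,
                         walking_speed_mps: float) -> float:
    """Logistic score: closer, faster, road-facing pedestrians are
    judged more likely to step into the road."""
    score = (-1.5 * distance_to_curb_m
             + 2.0 * (1.0 if facing_road else 0.0)
             + 1.0 * walking_speed_mps)
    return 1.0 / (1.0 + math.exp(-score))

# A pedestrian right at the curb, facing the road, walking briskly:
p = crossing_probability(0.5, True, 1.4)
print(f"P(cross) = {p:.2f}")
```

In a real system the inputs would come from a neural-network perception pipeline rather than hand-set features, but the output plays the same role: a probability the planning agent can act on.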
Her work includes developing a tool called PRISM for modeling stochastic systems, i.e. systems with probabilistic actions [37:40:00]. The tool, developed over 20 years with her students, recently received a test-of-time tool award [38:42:00].
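The kind of question such a tool answers can be sketched in a few lines. The discrete-time Markov chain below is an invented example written in plain Python, not in the tool’s own modeling language: given probabilistic transitions, what is the probability of eventually reaching a target state?

```python
# Sketch of a core probabilistic-model-checking question: in a
# discrete-time Markov chain, what is the probability of eventually
# reaching a target state? The chain here is an invented example.
import numpy as np

# States: 0 = start, 1 = retry, 2 = success (absorbing), 3 = failure (absorbing)
P = np.array([
    [0.0, 0.5, 0.4, 0.1],
    [0.3, 0.0, 0.5, 0.2],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Reachability probabilities x for the transient states solve the
# linear system (I - Q) x = b, where Q is the transient-to-transient
# block and b holds one-step probabilities into the target state.
transient = [0, 1]
A = np.eye(2) - P[np.ix_(transient, transient)]
b = P[transient, 2]
x = np.linalg.solve(A, b)
print(f"P(reach success from start) = {x[0]:.3f}")
```

Tools in this space scale this idea to huge state spaces and richer models, but the underlying computation is linear algebra over probabilistic transitions, as above.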
Regarding the commercialization of scientific projects, universities like Oxford do not impose obligations to commercialize research [41:23:00]. However, governments encourage scientists to consider applications alongside blue-sky research [41:27:00]. While some researchers publish their tools as open source, others may choose to keep their software closed source for commercialization, with university support systems in place [41:46:00].