Artificial Intelligence (AI) has often been portrayed as an existential risk comparable to pandemics. However, one pioneer in the field, Prof Michael Wooldridge, remains unfazed by such concerns. As the presenter of this year’s Royal Institution Christmas lectures, Wooldridge is more worried about AI becoming a controlling and invasive force in the workplace, monitoring employees’ emails, providing constant feedback, and even potentially deciding who gets fired. He finds the existence of such tools disturbing and aims to demystify AI through his lectures.
Wooldridge, a computer science professor at the University of Oxford, believes that the mass-market availability of general-purpose AI tools, such as ChatGPT, has made AI more accessible and relatable. However, he emphasizes that these tools are not magical or mystical. By showcasing how AI technology works in the lectures, he hopes to equip people with a better understanding of AI’s role as a tool, akin to a pocket calculator or a computer.
Joining Wooldridge during the lectures will be robots, deepfakes, and prominent figures in AI research. One of the highlights will be a Turing test, a challenge proposed by Alan Turing in 1950. If a human conversing with an unseen interlocutor cannot tell whether it is a person or a machine, the machine is said to have demonstrated human-like understanding. While some experts believe the Turing test has not yet been passed, Wooldridge’s colleagues argue that recent advances in AI have made it possible to generate text indistinguishable from human-written text.
However, Wooldridge disagrees with the notion that passing the Turing test indicates true artificial intelligence. He believes that the test, although historically significant, is not a good measure of AI. For him, the most exciting aspect of today’s AI technology is its potential to put philosophical questions to experimental test, such as whether machines can achieve consciousness. While human consciousness remains poorly understood, Wooldridge suggests that machines able to interact meaningfully with the world may possess a form of consciousness, though one very different from our own.
Despite the immense potential of AI in various fields, Wooldridge acknowledges the risks it poses. AI systems can gather personal information from social media feeds, manipulate political leanings, provide inaccurate medical advice, and perpetuate biases present in the data they are trained on. However, he believes that the current technology is unlikely to develop preferences misaligned with human values. To address these risks, Wooldridge emphasizes the importance of skepticism, transparency, and accountability.
While some organizations have warned about the dangers of AI, Wooldridge did not sign their statements. He believes that these warnings conflate pressing near-term concerns with highly speculative long-term ones. He argues that as long as AI is not given control over lethal systems, it is difficult to see how it could pose an existential risk. Despite his involvement in AI safety initiatives, he does not lose sleep over the potential dangers of AI; he is more concerned about issues such as the war in Ukraine, climate change, and the rise of populist politics.
The Royal Institution Christmas lectures, featuring Prof Michael Wooldridge, will be broadcast on BBC Four and iPlayer in late December.