Sam Altman, CEO of OpenAI, is on a global tour to raise awareness of the potential existential risks posed by artificial intelligence (AI) systems. Altman, along with hundreds of other tech leaders, is calling on governments to establish guidelines for the responsible development of AI over the next decade. The group has issued a statement asserting that mitigating the risk of extinction from AI should be a global priority on par with other societal-scale risks such as pandemics and nuclear war. Altman is leading the charge to create “artificial general intelligence” (AGI), an AI system capable of performing any task a human can. While he acknowledges the dangers of unchecked AI development, he also argues that the benefits of “superintelligence” are so great that we should risk the destruction of everything to try to build it anyway.
Altman’s tour has been a grueling few weeks, with stops in Paris, London, Oxford, Warsaw, and Munich. He is meeting with world leaders, scientists, and business-focused student societies to discuss the limits of AI systems and how their benefits should be shared. For Altman, hearing as many perspectives as possible takes priority over sticking to his team’s schedule for the day.
During his tour, Altman has faced criticism from those who believe that OpenAI’s focus on AGI distracts from the harm the company’s products are already capable of causing. OpenAI has clashed with Italian regulators over ChatGPT’s handling of personal data, and Altman spent several hours being harangued by US senators over everything from misinformation to copyright violations.
Altman and his colleagues have published a note outlining their vision for an international body, modeled on the International Atomic Energy Agency, to coordinate between research labs, impose safety standards, track computing power devoted to training systems, and eventually restrict certain approaches altogether. Altman says he has been surprised by how much interest senior politicians and regulators have shown in what such a body might look like.
Altman acknowledges that the challenges posed by AI play out on different timescales. He considers sophisticated disinformation and cybersecurity pressing problems, but OpenAI’s particular mission is AGI. In his view, the only path to safety runs through “capability progress”: building stronger AI systems in order to better understand how they work.
While Altman’s position may be controversial, he is committed to having these conversations about the risks and benefits of AI development. He believes a global regulatory framework for superintelligence is necessary to ensure responsible development and mitigate the risk of extinction from AI. His tour is an attempt to convince world leaders that the risk demands concerted international effort.
Altman’s tour, then, is an effort to raise awareness of both the risks and the benefits of AI development. He concedes the dangers of unchecked progress while maintaining that the payoff of superintelligence justifies pursuing it anyway, and he and his colleagues have called for a global regulatory framework to govern that pursuit and guard against the worst outcomes.