The recent US Senate hearings on artificial intelligence (AI) have given observers cause for concern. They brought to mind Karl Marx’s quip about history repeating itself, “the first time as tragedy, the second as farce”. This time around, however, the order is reversed: the earlier farce of the Meta (formerly Facebook) CEO explaining to a senator that his company made money from advertising has been followed by the tragedy of senators questioning Sam Altman, the new face of the tech industry.
Why is this a tragedy? As one of my children, revising O-level classics, once explained to me: “It’s when you can see the disaster coming but you can’t do anything to stop it.” The trigger moment was when Altman declared: “We think that regulatory interventions by government will be critical to mitigate the risks of increasingly powerful models.” Altman went on to suggest that the US government “might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities”. He believes that companies like his can “partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes that develop and update safety measures and examining opportunities for global coordination.”
Some observers saw Altman’s testimony as big news, with a tech boss finally admitting that the industry needs regulation. However, less charitable observers, like this columnist, see two alternative interpretations. One interpretation is that Altman’s proposal is an attempt to consolidate OpenAI’s lead over the rest of the industry in large language models (LLMs), because regulation often enhances dominance. The other interpretation is that Altman’s proposal is an admission that the industry is already running out of control, and that he sees bad things ahead. So his proposal is either a cunning strategic move or a plea for help, or both.
When a CEO calls for regulation, it’s usually a sign that something’s up. Meta, for example, has been running ads for ages in some newsletters saying that new laws are needed in cyberspace. Some of the cannier crypto crowd have also been baying for regulation. Mostly, these calls are pitches for corporations – through their lobbyists – to play a key role in drafting the requisite legislation. Companies’ involvement is deemed essential because – according to the narrative – government is clueless. As Eric Schmidt – the nearest thing tech has to Machiavelli – put it last Sunday on NBC’s Meet the Press, the AI industry needs to come up with regulations before the government tries to step in “because there’s no way a non-industry person can understand what is possible. It’s just too new, too hard, there’s not the expertise. There’s no one in the government who can get it right. But the industry can roughly get it right and then the government can put a regulatory structure around it.”
The industry’s next strategic ploy will be to plead that the current worries about AI are all based on hypothetical scenarios about the future. However, this is baloney. ChatGPT and its bedfellows are – among many other things – social media on steroids. And we already know how these platforms undermine democratic institutions and possibly influence elections. The probability that important elections in 2024 will not be affected by this kind of AI is precisely zero.
Furthermore, as Scott Galloway has pointed out in a withering critique, it’s also a racing certainty that chatbot technology will exacerbate the epidemic of loneliness that is afflicting young people across the world. “Tinder’s former CEO is raising venture capital for an AI-powered relationship coach called Amorai that will offer advice to young adults struggling with loneliness. She won’t be alone. Call Annie is an ‘AI friend’ you can phone or FaceTime to ask anything you want. A similar product, Replika, has millions of users.” We’ve all seen those movies – such as Her and Ex Machina – that vividly illustrate how AIs insert themselves between people and their relationships with other humans.
In his opening words to the Senate judiciary subcommittee’s hearing, the chairman, Senator Blumenthal, said this: “Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is: predators on the internet; toxic content; exploiting children, creating dangers for them… Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.” The only thing wrong with the senator’s stirring introduction is the word “before”. The threats and the risks are already here. And we are about to find out whether Marx’s view of history was the one to go for.