The eSafety commissioner has raised concerns about the potential for generative AI programs to automate child grooming by predators, as the Australian federal government moves to regulate the fast-growing new technology. Labor MP Julian Hill has urged governments to take action, warning that popular consumer products such as ChatGPT and Bard are “the canary in the coalmine”. Hill has proposed a new federal body in the prime minister’s department to monitor the sphere, stating that “decisions that shape the future of society cannot be left to the private interests of technologists or multinationals alone.”
Artificial intelligence chatbots and image generators such as Dall-E and Midjourney have exploded in popularity in recent months, with leading products funded and developed by tech giants Microsoft and Google. Critics, however, have raised questions about how such products could replace human employees, or be used for misinformation, child exploitation, or scams.
Sam Altman, the CEO of OpenAI, creator of ChatGPT and Dall-E, told a US congressional hearing this week that more regulation was “essential”. Ed Husic, the minister for science and technology, said Australia was among the first countries to adopt a national set of AI ethics principles, with the government mindful of issues around copyright and online safety.
Husic’s spokesperson told Guardian Australia that “AI is not an unregulated area,” and that the government was consulting with a wide range of stakeholders regarding potential gaps and considering further policy. The government received advice on “near-term implications of generative AI including steps being taken by other countries” from the National Science and Technology Council in March. The minister noted that existing copyright laws governed how data was collected by, and used to train, the AI programs, and that privacy and consumer protection laws also applied.
Last week’s federal budget contained $41m for the responsible deployment of AI programs. Communications minister Michelle Rowland said AI was also regulated by the eSafety commissioner, the Australian Competition and Consumer Commission, the Australian Information Commissioner, and the National AI Centre. She expressed particular concern about “deepfake” intimate images created by AI programs and said the government’s pending review of the Online Safety Act would examine the changing online environment.
The eSafety commissioner, Julie Inman Grant, said her agency had raised concerns about AI-generated image-based abuse since 2020, and was about to begin consultation on a new paper about safety implications and regulation needed for the sector. Inman Grant said eSafety had received complaints about AI regarding child cyberbullying and image-based abuse, but anticipated further problems.
One concerning possibility included predators creating chatbots to contact young people, with concerns about the potential for generative AI “to automate child grooming at scale.” Inman Grant welcomed AI companies requesting more regulation as their products became popular, but noted: “This skates over the fact that generative AI tools have, in fact, already been released without that step taking place.” She said Australian regulators were already working with international counterparts on developing policy.
Hill, who used ChatGPT to compose a parliamentary speech in February warning AI could be harnessed for “mass destruction”, said lawmakers needed to learn from other jurisdictions. He proposed a new Australian AI Commission to be built “at the centre of government” inside the prime minister’s portfolio, to replace the AI Centre housed within the CSIRO, to bring together industry, public servants, academics, and civil society.
“We are right to worry about uncontrolled generative AI. Imagine unleashing this intelligence with self-executing power, acting without intermediating human judgment,” Hill said.