Down Under’s AI Ambitions: Australia Kicks Off Consultative Review to Shape Future of Artificial Intelligence

Australian Government Launches Review of Artificial Intelligence Amid Global Concerns

The Australian government, led by Labor Party leader Anthony Albanese, has launched a review of artificial intelligence (AI) amid growing concerns that the technology could cause irreparable harm to society in the near future. Industry and Science Minister Ed Husic has released two papers that kick off an eight-week consultative process to gather input from a variety of stakeholders on a new regulatory framework. The first is a ‘rapid response’ report from the National Science and Technology Council (NSTC) that explores the opportunities and risks posed by generative AI. Such an analysis has become urgent given the speed at which tech companies in Australia and globally are pivoting to AI, and the pace at which AI is seeping into almost every industry.

The paper examines a range of disruptive scenarios that AI could foment, including rampant misinformation and the consequent polarization of the population, widespread job displacement, and deepening inequity. Australia’s leading medical association, for instance, has called for regulations to oversee AI in healthcare, a sector vulnerable to racial and age-related bias with potentially catastrophic consequences. The second paper is a consultation paper that examines what other nations around the world are doing to regulate AI.

The Australian government is looking closely at a similar review launched by the U.K.’s antitrust agency, the Competition and Markets Authority, as well as the AI-specific Act that the European Union is deliberating. It will also no doubt study recent actions by the U.S. government, which launched public assessments of major generative AI systems involving communities of hackers, data scientists, independent community partners, and AI experts.

“Given the developments over the last, in particular, six months, we want to make sure that our legal and regulatory framework is fit-for-purpose, and that’s why we’re asking people, either experts or the community, to be involved in this process, the discussion process, with the papers that we’ve put out, to let us know what their expectations are and what they want to see,” said Husic.

The Australian government’s consultation paper emphasizes that AI must already operate under the aegis of existing rules in the country, as it does in the U.K. and Europe, ranging from sector-specific regulations (such as those governing healthcare and energy) to rules that apply across all industries (privacy, security, and consumer safeguards). The deliberation will consider whether to simply strengthen each sector’s rules, to introduce AI-specific legislation, or both. Husic said that if “high-risk” areas emerge from this process, citing hypothetical abuse of facial recognition as an example, his government will proactively address those concerns within the emerging framework.

“We want people to be confident that the technology is working for us and not the other way around,” Husic added. “Governments have got a clear role to play in recognizing the risk and responding to it, putting the curbs in place.” The rapid rise of generative AI has triggered increasing concern about the technology’s intrusiveness and tendencies toward bias, as well as about truthfulness and ‘hallucinations’.

The Australian government’s review is a welcome step towards ensuring that AI is developed and deployed responsibly and ethically. Governments and stakeholders must work together to address both the risks and the opportunities of AI. As the technology continues to evolve at a rapid pace, it is more important than ever that AI operate within a framework that prioritizes the interests of society and individuals.