The Australian government has announced a surprise eight-week consultation to determine how heavily it should police the AI sector, including whether any “high-risk” artificial intelligence tools should be banned.
Other regions, including the United States, the European Union and China, have also launched measures to understand and potentially mitigate risks associated with rapid AI development in recent months.
On June 1, Industry and Science Minister Ed Husic announced the release of two papers — a discussion paper on “Safe and Responsible AI in Australia” and a report on generative AI from the National Science and Technology Council.
The papers came alongside a consultation that will run until July 26.
The government is seeking feedback on how to support the “safe and responsible use of AI” and asks whether it should adopt voluntary approaches such as ethical frameworks, introduce specific regulation, or pursue a mix of both.
A map of options for potential AI governance with a spectrum from “voluntary” to “regulatory.” Source: Department of Industry, Science and Resources
One question in the consultation directly asks whether any high-risk AI applications or technologies should be banned completely, and what criteria should be used to identify the AI tools that warrant a ban.
A draft risk matrix for AI models was included in the comprehensive discussion paper for feedback. While provided only as an example, it categorized AI in self-driving cars as “high risk,” while a generative AI tool used for a purpose such as creating medical patient records was considered “medium risk.”
#AI is already part of our lives. As the technology develops, we need to ensure it meets Australians’ expectations of responsible use. Be part of the @IndustryGovAu discussion, below. https://t.co/Gz11JCXlsG
Highlighted in the paper was the “positive” AI use in the medical, engineering and legal industries but also its “harmful” uses such as deepfake tools, use in creating fake news and cases where AI bots had encouraged self-harm.
The bias of AI models and “hallucinations,” nonsensical or false information generated by AI, were also raised as issues.
The discussion paper claims AI adoption is “relatively low” in Australia due to “low levels of public trust.” It also pointed to AI regulation in other jurisdictions and Italy’s temporary ban on ChatGPT.
Meanwhile, the National Science and Technology Council report said that Australia has some advantageous AI capabilities in robotics and computer vision, but its “core fundamental capacity in [large language models] and related areas is relatively weak,” and added:
“The concentration of generative AI resources within a small number of large multinational and primarily US-based technology companies poses potentials [sic] risks to Australia.”
The report further discussed global AI regulation, gave examples of generative AI models, and opined that such models “will likely impact everything from banking and finance to public services, education and creative industries.”