19 January 2024

AI might need some mandatory guardrails, government says

| Chris Johnson

AI can be good, but there must be safeguards. The government is responding to concerns. Photo: File.

The Federal Government is considering mandatory guardrails for AI development to help ensure the technology is safe and used responsibly. The safeguards would also apply to the deployment of AI in high-risk settings.

Releasing the government’s interim response to the Safe and Responsible AI in Australia consultation, Industry and Science Minister Ed Husic said it was clear that while AI has immense potential to improve wellbeing and grow the economy, Australians want stronger protections to help manage the risks.

“Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled,” Mr Husic said.

“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.

“We want safe and responsible thinking baked in early as AI is designed, developed and deployed.”


The government has sought views on mitigating any potential risks of AI and supporting safe and responsible AI practices.

The mandatory guardrails being considered relate to:
  • testing of products to ensure safety before and after release;
  • transparency regarding model design and the data underpinning AI applications;
  • labelling of AI systems in use and/or watermarking of AI-generated content; and
  • training for developers and deployers of AI systems, which could include possible forms of certification and clearer expectations.

Consultation on possible mandatory guardrails is ongoing, but Mr Husic said some immediate actions are already being taken.

These include working with industry to develop a voluntary AI safety standard and options for voluntary labelling and watermarking of AI-generated materials.

They also include establishing an expert advisory group to support the development of options for mandatory accountability guardrails for organisations developing, deploying and relying on AI systems.

Australia is not the only country looking at ways to mitigate any emerging risks of technologies such as AI. Some jurisdictions favour voluntary approaches, while others are pursuing more rigorous regulations.

Mr Husic said Australia was closely monitoring how other jurisdictions are responding to the challenges of AI, including initial efforts in the EU, the US and Canada.

He said that, building on its engagement at the UK AI Safety Summit in November, the government would continue to work with other countries to shape international efforts in this area.

The consultation discussion paper says AI is already improving many aspects of people’s lives, but the speed of innovation in AI could pose new risks. This creates uncertainty and gives rise to public concerns.

“Australia has strong foundations to be a leader in responsible AI,” the discussion paper states.

“We have world-leading AI research capabilities and are early movers in the trusted use of digital technologies.

“Australia established the world’s first eSafety Commissioner in 2015 to safeguard Australian citizens online and was one of the earliest countries to adopt a national set of AI Ethics Principles.

“This consultation will help ensure Australia continues to support responsible AI practices to increase community trust and confidence.”


The government’s interim response is targeted towards AI in high-risk settings where it believes harms could be difficult to reverse.

It wants all low-risk AI use to continue to flourish without impediments.

“In broad terms, what we’ve wanted to do is get the balance right with the work that we’ve undertaken so that we can get the benefits of AI while fencing up the risks and looking at realising that a lot of the way in which AI is used is low risk and beneficial,” Mr Husic told ABC Radio.

“What we want to do is get some of the best minds, through an expert advisory panel, to basically define those areas of high risk that will require a mandatory response, and that we also spell out what the consequences potentially are for not doing so.

“Our preference is to be able to work with industry and other people that are interested in this space, to be able to get a uniform, cooperative approach to this.

“And that’s why we’re staging it, developing a voluntary safety standard initially, and then scaling and laddering that up to mandatory guardrails longer term.”

Join the conversation


Won’t need Bruce Pascoe to write fake history now. The “moral truth” brigade will be pumping it out by the petabyte. That, along with anything else that fits the elite narrative will be allowed from AI, while everything else — especially anything that disputes the elite’s moral narratives — will be smeared as dastardly misinformation, which you’ll be encouraged to believe all derives from Russian AI bot farms. The AI scare will suit these political manipulations to a tee, playing into the hands of the cultural elite perfectly.

I see you set the ChatGPT parameters to “archconservative rant”, Rustygear. It worked – ChatGPT delivered.

Thanks for proving my point. If it’s not elite-approved woke narrative, it must be AI-generated misinformation. People like JS here will use this ruse over and over again in their quest for total cultural domination.

But I didn’t call your AI-generated archconservative rant misinformation, Rustygear. To call it misinformation would be to credit it with being “information” – albeit false or misleading. There was nothing informative, it was just ranting drivel.

You seem pretty upset JS, that not everyone bows down to your totalising ideology. Sucks huh?

CaptainSpiff 11:41 am, 20 Jan 24

@JS You’re sounding a bit shrill… Maybe try something other than insults and ad hominem attacks?

As any halfway competent observer would note, government authorities use available means to shape narratives, advance their own agenda, and suppress critics. This applies to both left and right wing governments BTW. But apparently you’ve never noticed?

“… government authorities use available means to shape narratives, advance their own agenda, and suppress critics …” I don’t disagree that governments do try to shape narratives and advance their own agenda – presumably it’s how and why they got elected. But “suppress critics”? When have your criticisms been suppressed? The mere fact that, on here, we are able to openly disagree on all manner of issues belies that theory.

JS, it just means your mob, the woke elite, haven’t yet been able to totally suppress views you don’t like. But you’re working on it, right?
