Australia's AI Safety Gap Vastly Underestimated

University of Queensland

The public expects airline-grade safety. Experts say we're nowhere close.

Imagine boarding a plane knowing that five Dreamliners crash every day, roughly one every five hours. You wouldn't fly. Nobody would.

Yet this is the level of risk Australians currently face from artificial intelligence (AI), according to expert forecasters. Australians expect better.

Our new survey of 933 Australians reveals a stark disconnect: 94 per cent expect AI systems to meet or exceed the safety standards of commercial aviation.

Commercial flights achieve a one-in-30-million risk of death per flight, or about 150 deaths per year worldwide.

Expert forecasters' assessments of the risk from AI? At least 4,000 times higher. AI experts themselves put it at 30,000 times higher.

This isn't just an academic concern. The gap between what Australians expect and what experts assess creates a trust crisis that threatens the technology adoption our government considers essential for economic competitiveness.

The Productivity Commission argues AI-specific regulation should be a "last resort."

Their logic makes sense: poorly designed rules could slow innovation and squander productivity gains. Better to rely on existing frameworks, like privacy laws, consumer protections, and anti-discrimination legislation.

But the Commission's approach faces three problems.

First, trust in AI developers is very low. Only 23 per cent of Australians trust technology companies to ensure AI safety.

When we asked what stops people using AI, lack of knowledge wasn't the top barrier.

Privacy concerns topped the list at 57 per cent, followed by "I don't trust the companies making AI" at 32 per cent. The government wants to avoid regulation that stops adoption, but the public don't trust self-regulated AI.

Second, Australians think the government will under-regulate, not over-regulate. When forced to choose, 74 per cent worry the government won't regulate AI enough. Only 26 per cent fear over-regulation. And 83 per cent believe regulation already lags behind the technology.

When asked whether government should prioritise managing risks or driving innovation, 72 per cent chose risk management.

Third, the safety gap is enormous. Expert forecasters put the risk of an AI catastrophe by 2100 at between 2 per cent and 12 per cent.

These are people with proven track records in rigorous prediction tournaments.

They fear AI could enable (or even create) biological weapons, nuclear strikes, or threats we haven't even considered.

Weighing these risks, AI researchers and company CEOs estimate the chance of catastrophe at between 2 per cent and 25 per cent. Even the lowest of these estimates puts AI's risks far beyond what the public will tolerate.

Even if regulation slows AI down, Australians are willing to wait.

Most (80 per cent) would support a 10-year delay in advanced AI development to reduce catastrophic risk from 5 per cent to 0.5 per cent.

Even 50-year delays receive majority support. Half wouldn't accept even a 1 per cent catastrophic risk in exchange for solving climate change and extending lifespans by 20 years.

So the real barrier to AI adoption is not over-regulation. It's a lack of trust, stemming from the wide gap between public safety expectations and current reality.

Other countries are moving beyond light-touch approaches.

The European Union implemented technology-specific regulation in 2024.

The UK, USA, and South Korea established AI Safety Institutes. California now regulates the most advanced AI models. These safeguards recognise that autonomous systems capable of pursuing goals create unique risks that consumer laws don't cover.

Australia could follow this path.

The government recently announced plans for an AI Safety Institute, which would align us with international partners.

Mandatory safety testing for frontier systems, independent audits, incident reporting, and whistleblower protections would bring Australia in line with emerging standards. Our survey shows 9 in 10 Australians say these safeguards would increase their trust.

Aviation has aviation-specific regulation. Nuclear has nuclear-specific regulation. Pharmaceuticals have pharmaceutical-specific regulation.

The public expects AI to be no different.

Adoption will increase not through education campaigns about AI's benefits, but when the technology becomes genuinely trustworthy.

That's how AI stops being a threat the public barely tolerates and becomes a technology they actually want to use.

Read the full report of the Survey Assessing Risks from Artificial Intelligence (SARA) 2025.

Dr Michael Noetel is an Associate Professor in the School of Psychology at The University of Queensland.
