This opinion piece by Human Rights Commissioner Lorraine Finlay appeared in The Mandarin on 18 August 2025.
As Australia prepares for the Productivity Roundtable, one question looms large: how do we ensure that the technologies driving future productivity also uphold the values that define us?
Artificial intelligence (AI) is often touted as a solution to Australia's sluggish productivity growth. But without the right guardrails, it risks creating new problems even as it solves old ones. This roundtable is a chance to move beyond theory and commit to practical reforms that ensure AI drives sustainable, inclusive growth.
AI is already shaping recruitment, credit scoring, policing and welfare delivery - areas that directly affect our lives. When these systems are biased or poorly designed, the consequences can be severe: discrimination, exclusion, and erosion of public trust.
Economic growth must be pursued alongside a clear understanding of AI's risks - especially in high-impact settings where human rights are most vulnerable.
The current proposal for mandatory guardrails for high-risk AI is targeted and proportionate. It supports innovation by ensuring appropriate safeguards are in place where the potential for harm is greatest.
The Australian Human Rights Commission has consistently called for a legislative framework that sets enforceable standards for AI. We support an Australian AI Act to establish mandatory guardrails for high-risk applications, prohibit uses that pose unacceptable risks, and require human oversight.
Such a framework would provide clarity and consistency for businesses and regulators - enabling responsible innovation aligned with Australia's values. Without adequate safeguards, we risk undermining the very productivity gains we seek.
For businesses, this is not just about compliance - it's a strategic imperative. Consumers, investors, and employees increasingly expect ethical and transparent practices. Trust is a competitive advantage.
Several high-profile cases have shown what happens when AI-powered products are launched without adequate safeguards. Microsoft's Tay chatbot was taken offline within 24 hours after it began generating offensive content. Google's photo-tagging algorithm misidentified people of colour in deeply inappropriate ways. Amazon's recruitment tool was discontinued after disadvantaging female candidates.
These failures to consider human rights led to public backlash and loss of trust, damaged reputations, and diverted resources away from innovation.
By contrast, companies that lead with integrity and foresight can shape the future of responsible innovation. Human rights-centred regulation builds trust, reduces risk, and creates the stable environment businesses need to invest confidently. Voluntary action by industry is important, but it cannot replace enforceable rules that ensure consistency, accountability, and protection - especially when things go wrong.
While new laws should complement existing legislation, the need for a regulatory gap analysis must not become an excuse for inaction. The pace of reform - particularly in areas like privacy law - has been cripplingly slow. By the time consensus is reached, the technology will have already moved on. We need to act now, not wait for a 'perfect' solution.
Some suggest the United States is "letting it rip" on AI, but the reality is more nuanced. While federal movement is limited, many US states are stepping up.
Colorado now requires companies to assess and mitigate risks in high-risk AI systems. In June, Texas passed a comprehensive law mandating transparency and consumer protections, including bans on manipulative techniques. The removal of a freeze on state-level AI regulation from Trump's "Big Beautiful Bill" reflects bipartisan recognition that AI cannot remain unregulated.
An Australian AI Act would be a practical, targeted response to the growing understanding that high-risk AI requires specific safeguards.
As the Productivity Roundtable considers how to unlock Australia's next wave of economic growth, it must also ask: what kind of growth do we want? One that is fast but fragile, or one that is resilient, inclusive, and built on trust?
Embedding human rights into the governance of AI is not a constraint - it's a prerequisite for long-term success. An AI Act would ensure technology serves the public good, not just private interests, and that all Australians benefit from innovation without sacrificing their rights.
We should strive to build a digital Australia that is innovative, inclusive, and rights-focused. Because the future of technology is not just about what we can do - it's about what we choose to do.
Lorraine Finlay is Australia's Human Rights Commissioner