Wednesday 10 December
PWDA was pleased to respond to the Department of Social Services' Automation and Artificial Intelligence (AAI) Ethics Framework.
We participated in a roundtable to develop Australia's AI Plan, and have made submissions and taken part in a range of Federal and State consultations on digital strategy and the application of AI in government services.
PWDA supports the submission by Inclusion Australia, which raises concerns about the AAI framework's reliance on internal governance processes rather than independent audit, public reporting or a public register of AAI systems. There are not sufficient safeguards built into these systems to protect people, or to create oversight and accountability when automated systems get things wrong.
Our submission raises a number of issues with the AAI framework.
Vulnerable to error
AAI systems are 'vulnerable to hallucination and error'. Automated systems can only follow a set framework, and the large language models referred to as artificial intelligence (AI) can only infer or forecast the next word, action or number based on a probability calculation drawn from past data.
In addition to the potential for error from automation or inference, AI tools are not able to differentiate between what is true and what is not, making them vulnerable to error and posing difficulties for ethical regulation.
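To illustrate the point, the toy sketch below (with invented word probabilities of our own, not drawn from any real model or departmental system) shows next-word prediction as nothing more than a weighted random draw from past-data probabilities, with no check on whether the result is true.

```python
import random

# Assumed toy probabilities for the next word after "your payment is" --
# invented for illustration, not taken from any real model.
next_word_probs = {
    "approved": 0.55,
    "pending": 0.30,
    "cancelled": 0.15,
}

def predict_next_word(probs):
    """Pick the next word by a weighted random draw -- a probability forecast, not a fact check."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print("your payment is", predict_next_word(next_word_probs))
```

The model will sometimes print "cancelled" simply because that word is statistically plausible, regardless of whether any payment has actually been cancelled.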
AI-based computing vulnerability
It would be prudent to view any Australian system built on an AI framework as 'vulnerable' to collapse, given that the largest player, OpenAI, made a US$11.5 billion loss last quarter. The AI industry currently requires more than 100 times the capital investment it returns in revenue.
Nvidia and OpenAI, together with the other tech giants most exposed to AI (Alphabet, Amazon, Meta, Tesla, Apple and Microsoft), are highly likely to experience a crash within the next 12 months. Any Australian system built on these platforms risks failure or significant price increases when the contracted company runs out of capital.
Undermining Australia's Climate and Human Rights Obligations
In addition to consuming large amounts of capital and energy to operate, AI servers require huge volumes of water for cooling, and the industry depends on exploitative labour practices that could reasonably be described as modern slavery. This undermines any notion of ethics, as well as Australia's climate and human rights obligations.
Systemic vulnerabilities in the AAI Ethics Framework
Nowhere does the AAI Ethics Framework address the subjects of this decision making (an extract of the model from page 7 is below). 'Users' are the staff using the system, but no mention is made of protections for the people about whom these decisions are being made.

It is difficult to see how the monitoring framework will operate effectively if it is based on random sampling of outputs. AI is notoriously inconsistent in the outputs it produces, even from identical prompts, and its creators are unable to explain why. Auditing a sample of outputs will not necessarily generalise to other outputs, given the inconsistency of AI decision making.
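The toy simulation below (invented figures, not Services Australia data) illustrates a further limit of random-sample auditing: errors that are rare overall, but which in practice tend to fall on a small cohort of customers, are frequently missed altogether by a small random audit.

```python
import random

# 1,000 automated decisions containing 40 errors (4%) -- figures invented
# purely for illustration, not drawn from any real system.
decisions = ["error"] * 40 + ["correct"] * 960
random.shuffle(decisions)

# A random audit of 20 decisions often finds no errors at all,
# while 40 customers have still been negatively affected.
audit_sample = random.sample(decisions, 20)
print("errors found in audit of 20:", audit_sample.count("error"))
print("errors in the full population:", decisions.count("error"))
```

In roughly four out of ten runs of this sketch the audit finds nothing, which is why we recommend targeted expert human audit of issues raised, rather than reliance on random sampling.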
It is unclear how Services Australia will be able to achieve meaningful technical standards for the government's use of AI, when the creators of AI are currently unable to explain how it works.
We recommend the introduction of regular expert human audit of issues raised that negatively impact customers, especially those from marginalised communities, and evaluation of the decisions made in order to improve AAI performance.
PWDA recognises that there are opportunities for the deployment of automation and large language models in government services, but the current AAI framework does not do enough to protect the interests of the Australian people who will be the subjects of the decisions it makes. We recommend mandating greater transparency when AAI is used, requiring human review and action to address customer complaints, and co-designing algorithms and processes with people with disability.
Recommendations
- We recommend the introduction of regular expert human audit of issues raised that negatively impact customers, especially those from marginalised communities, and evaluation of the decisions made in order to improve AAI performance.
- We recommend mandating greater transparency when AAI is used, requiring human review and action to address customer complaints, and co-design of algorithms and processes with people with disability.