Call for Transparency on NDIS Auto Plans

Joint Statement by Disability Representative Organisations on 'Computer-Generated NDIS Plans' and changes to review pathways emphasising the need for high standards of scrutiny

Friday 5 December

The Guardian's recent reporting on proposals to introduce computer-generated NDIS plans appears to signal a substantial change in the way participant budgets may be determined. Disability Representative Organisations (DROs) emphasise that reforms require the highest standards of scrutiny, transparency and safeguards to ensure they do not undermine the rights and experiences of people with disability.

Our concerns about the use of automation in New Framework Planning are compounded by changes to review pathways, which appear to narrow the grounds on which decisions can be challenged and limit the scope of the Administrative Review Tribunal (ART), which cannot vary a participant's plan and can only trigger a reassessment by the original decision-maker. This would significantly limit participants' ability to correct errors or challenge flawed assumptions. At the same time, there is currently no clear guarantee that written evidence provided by participants will be considered in these processes. We are deeply concerned that these changes, combined with increasing automation, will create significant risks, particularly for people with the highest support needs.

People with disability, their families, and advocates have long raised concerns about the lack of transparency and accountability in how the NDIS makes decisions.[1] References to 'computer-generated' NDIS plans indicate automated decision-making (ADM): the use of computer systems to automate all or part of an administrative decision-making process. Artificial intelligence (AI) is a broader term for an engineered system that, without explicit programming, generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters.

Automated decision-making (ADM) and artificial intelligence (AI) systems are only as reliable as the information fed into them. If historical data has under-represented people with psychosocial disability, intellectual disability or complex communication needs; people with fluctuating, multiple or complex disabilities; First Nations people; and other intersectional and marginalised communities, ADM will reproduce and amplify those gaps. This is well-established across other sectors: algorithms built from "average" cases consistently fail those whose experiences sit at the margins, including women, First Nations people, culturally and linguistically diverse communities, and people with disability.[2]

Computer-generated decisions also cannot explain how they reached a conclusion, what assumptions were prioritised, or whether the model was designed to minimise cost or to standardise prices across vastly different geographic and service contexts and thin markets. People cannot meaningfully challenge a decision if they cannot see or understand how it was made. When this opacity is combined with weaker review rights, participants face the real risk of being unable to contest flawed assumptions. These risks are heightened for the many people who already face significant barriers in navigating the NDIS or the ART, including people with an intellectual disability and those without access to informal supports, advocacy or legal representation.

This opacity is particularly concerning in Australia, which continues to lack a comprehensive legal framework regulating the use of AI and ADM in public administration. Without regulation, there is no requirement for algorithms to be transparent, reviewable, or accountable. Issues such as privacy, data integrity, system resilience, and the risks associated with commercial AI providers remain unresolved.

The needs assessment and new planning framework need to be meaningfully co-designed with the disability community and their representative organisations. This includes commitments to transparency, regular meetings, and clarity about timelines and about when feedback and ideas can influence legislation, Rule-making and implementation.

DROs call on the NDIA to:

  • Publicly disclose current Agency proposals on where ADM or AI will be used in New Framework Planning, including in the budget allocation process; how it operates; the datasets it relies on; and the degree of human oversight and capacity for positive intervention.
  • Provide public information and dedicated community briefings on every stage of the Needs Assessment and New Planning Framework, including how the Support Needs Assessment will be used to develop a budget.
  • Publish any legal advice about the reviewability of new framework plans at the Tribunal, and ensure all aspects of the New Framework Planning rules are explainable, can incorporate all relevant information, and that the plan budget is capable of being meaningfully challenged at internal and external review.
  • Partner with DROs and Disability Representative Carer Organisations (DRCOs) to jointly agree on a strategy for co-design of all aspects of New Framework Planning and Rules development, including any proposed use of automation, by end of 2025.

Footnotes

[1] These concerns are supported by the fact that 73% of the 7,132 appeals against NDIA decisions brought in the 12 months to June 2025 succeeded in changing the decision.

[2] Gender bias: Nadeem, A., Marjanovic, O., & Abedin, B. (2022). Gender bias in AI-based decision-making systems: A systematic literature review. Australasian Journal of Information Systems, 26. https://doi.org/10.3127/ajis.v26i0.3835

Cognitive bias: Rastogi, C., et al. (2022). Deciding fast and slow: The role of cognitive biases in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), 1-22.

Kadiresan, A., Baweja, Y., & Ogbanufe, O. (2022). Bias in AI-based decision-making. In M. V. Albert, L. Lin, M. J. Spector, & L. S. Dunn (Eds.), Bridging Human Intelligence and Artificial Intelligence. Educational Communications and Technology: Issues and Innovations. Springer, Cham. https://doi.org/10.1007/978-3-030-84729-6_19

About our organisations

This statement was developed by DROs with coordination support from Disability Advocacy Network Australia (DANA) in its national coordination role. DROs are funded by the Department of Social Services (DSS) to represent people with disability.

The following organisations have contributed to and/or expressed their support for this joint position statement:


  • Australian Autism Alliance
  • Australian Federation of Disability Organisations
  • Children and Young People with Disability Australia
  • Community Mental Health Australia
  • Disability Advocacy Network Australia
  • Down Syndrome Australia
  • Inclusion Australia
  • National Mental Health Consumer Alliance
  • People with Disability Australia
  • Physical Disability Australia
  • Women With Disabilities Australia