New AI Standard Backs Responsible Government Use

DTA

The Digital Transformation Agency has released the Australian Government's AI technical standard, a new resource to help agencies embed transparency, accountability, and safety in their use of artificial intelligence across the public sector.

The Australian Government AI technical standard (the Standard) is the next complementary piece in the DTA's existing suite of AI guidance for the Australian Public Service (APS).

The standard will support agencies in delivering high-quality services through their use of AI, ensuring they maximise the benefits delivered to the community.

"The DTA has strived to position Australia as a global leader in the safe and responsible adoption of AI, without stifling adoption," explains Lucy Poole, General Manager of Digital Strategy, Policy and Performance, DTA. "We believe the comprehensive lifecycle approach we've taken, combined with the flexibility to go above and beyond, complements the broader suite of AI resources available to the APS."

"Our technical standard was developed with extensive research of international and domestic practices, and comprehensive consultation with the APS," continues Ms Poole. "The standard is designed with public trust front of mind."

It sets out technical requirements for AI systems across their full lifecycle, from initial design through to monitoring and potential decommissioning. The standard applies to a range of delivery models, including in-house systems, vendor solutions, pre-trained AI model solutions and managed services.

"The AI technical standard isn't about adding more processes to its users. It's designed to integrate with what agencies already do," adds Ms Poole. "It allows agencies to embed responsible AI practices into existing governance, risk and delivery frameworks."

Inside the Standard

The standard provides requirements and recommendations across three key phases of the AI system lifecycle: Discover, Operate and Retire. The practices described at each phase ensure the system is ethical, effective, and aligned with regulation from inception to decommissioning.

Under Discover, AI systems are conceptualised, designed, and prepared for deployment. The standard highlights the following elements for systems to meet high-quality thresholds.

  • Design: Define the system's purpose, objectives, and scope. This includes ethical risks, biases, fairness, government policies, human oversight and accountability structures.
  • Data: Identify the data make-up for building and using the system and ensure quality, privacy, and security measures are implemented. Apply governance practices to maintain compliance and manage AI bias.
  • Train: Create, adapt and select the algorithms and models for the AI, and manage their calibration, training, and context.
  • Evaluate: Evaluate the accuracy, reliability, and robustness of the AI. Conduct adversarial testing to identify risks and ensure compliance with guidelines.

The Operate phase implements the AI system while establishing processes for ongoing oversight and performance tracking. The standard's approach suggests the following elements under this phase.

  • Integrate: Embed the AI into the enterprise ecosystem and ensure compatibility with the platform, based on user needs and with safeguards against unintended outcomes.
  • Operate: Securely launch the AI system and prevent unauthorised access. Provide documentation and ensure compliance with regulations.
  • Monitor: Track performance continuously during operation. Detect biases, data drift, and unforeseen issues. AI should be updated accordingly to operate as required with audit logs.

The final phase, Retire, applies when an AI system is no longer needed and ensures its responsible retirement.

  • Decommission: Phase out the system in a controlled manner, complying with data retention policies. Assess risks and provide transition plans if necessary.

"At every stage of the AI lifecycle, the Standard helps agencies keep people at the forefront, whether that's through human oversight, transparent decision-making or inclusive design," continues Ms Poole.

Next steps

Agencies are encouraged to begin applying the AI technical standard to guide their development and use of current and future AI systems. The DTA will continue to work with agencies to embed the standard into existing risk, delivery, and assurance processes.
