Federal government responds to AI consultation paper

RMIT

The federal government has today released its response to last year's consultation paper on safe and responsible AI in Australia.

Experts available:

  • Professor Lisa Given: the government's approach and international comparisons

  • Dr Dana McKay: the potential of AI - good and bad

  • Dr Nicole Shackleton: the lack of consideration of AI use in sex and intimate technologies

  • Dr Nataliya Ilyushina: the costs of delay in regulation

  • Professor Mark Sanderson: the importance of understanding diversity when legislating AI

The government response can be found here. Full expert commentary below.

Professor Lisa Given, Director of the Social Change Enabling Impact Platform and Professor of Information Sciences

"The Australian government appears to be taking a proportional approach to potential risks of generative AI by focusing, at least initially, on application of AI technologies in high-risk settings (such as healthcare, employment, and law enforcement).

"This approach may be quite different to what other countries are considering; for example, the European Union is planning to ban AI tools that pose 'unacceptable risk,' while the United States has issued an executive order to introduce wide-ranging controls, such as requirements for transparency in the use of AI generally.

"However, the Australian government will also aim to align its regulatory decisions with those of other countries, given the global reach and application of AI technologies that could affect Australians directly.

"Taking a proportional approach enables the government to address areas where the potential harms of AI technologies are already known (e.g. potential gender discrimination when used in hiring practices to assess candidate's resumes), as well as those that may pose significant risks to people's lives (e.g. when used to inform medical diagnoses and treatments). Focusing on workplaces and contexts where AI tools pose the greatest risk is an important place to start.

"The creation of an advisory body to define the concept of "high-risk technologies" and to advise government on where (and what kinds of) regulations may be needed is very welcome. It will complement other initiatives that the Australian government has taken recently to manage the risks of AI."

Professor Lisa Given is an interdisciplinary researcher in human information behaviour. Her work brings a critical, social research lens to studies of technology use and user-focused design.

Dr Dana McKay, Senior Lecturer in Innovative Interactive Technologies, School of Computing Technologies

"AI is affecting more of people's lives than they realise. It affects the search results we get, the healthcare we receive, the jobs we apply for, and how much money we can borrow. In some countries, AI has even been used to determine prison sentences. AI is also a matter of national security.

"When AI can affect our health, wealth and happiness, it is key that it is regulated to ensure personal wellbeing.

"While the negative consequences of AI are large, so are the potential benefits.

"Automating tasks that can be done by machines frees up human capacity and intellect for more complex or human-oriented tasks.

"Ultimately, AI is a tool like any other, and needs principles-based legislation to ensure that it is beneficial for all of Australian society, not just those who benefit most from productivity gains, or those who own the technologies."

Dr Dana McKay studies the intersection of people, technology and information. Her focus is on ensuring advances in information technology benefit society as a whole.

Dr Nicole Shackleton, Lecturer in Law

"The Australian Government's Interim Response to the consultation into the Safe and Responsible Use of AI makes promising steps towards proactive regulation of high-risk AI technologies.

"What is concerning, however, is the lack of consideration of AI use in sex and intimate technologies, which is a growing market internationally and in Australia.

"Other than the Government's focus on AI-generated pornography or intimate images, often referred to as deepfake pornography, which is increasingly being developed and used without consent to bully and harass, the interim report shows little interest in issues of sexual privacy, the safe use of AI in technologies in sexual health education, or the use of AI in sex technologies such as personal and intimate robots.

"It is vital that any future AI advisory body be capable of tackling such issues, and that the risk-based framework employed by the Government does not result in unintended consequences which hinder potential benefits of the use of AI in sex and intimate technologies."

Dr Nicole Shackleton is a socio-legal researcher focused on gender and sex, technology and regulation. Her research aims to inform law reform to prevent online abuse, and the regulation of technology companies.

Dr Nataliya Ilyushina, Research Fellow, Blockchain Innovation Hub

"Australia's unacceptable delay in developing AI regulation represents both a missed chance for its domestic market and a lapse in establishing a reputation as an AI-friendly economy with a robust legal, institutional and technological infrastructure globally.

"The consultation process for responsible AI regulation concluded six months ago. Australia endorsed the Bletchley Declaration at the AI Summit in the UK last November, and EU officials forged a provisional agreement on the world's first comprehensive legislation on AI regulation on the 8th of December.

"The adoption of AI is affordable and accessible, which is particularly essential for the growth of small businesses - the cornerstone of the Australian economy.

"Employing AI to augment human jobs has demonstrated a capacity to enhance productivity, providing a direct solution to Australia's challenges of stagnant productivity growth, the cost-of-living crisis and labour shortages.

"While businesses prefer voluntary codes and frameworks, other stakeholders - especially those working on risks related to cybersecurity, misinformation, fairness and biases - seek more stringent regulations.

"Over-regulation of AI might incentivise businesses to relocate their operations overseas, potentially causing greater job losses than the implementation of AI itself.

"Not having enough regulation can lead to market failure where cybercrime and other risks that stifle business growth, lead to high costs and even harm individuals are high."

Dr Nataliya Ilyushina is a Research Fellow at the Blockchain Innovation Hub and ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University. Her work investigates decentralised autonomous organisations and automated decision making, and the impact they have on labour markets, skills and long-term staff wellbeing.

Professor Mark Sanderson, Dean of Research and Innovation, Schools of Engineering and Computing Technologies

"As smart as AI has become, these computer systems are still prompted and controlled by something smarter, human beings. As important as it is to be concerned about AI algorithms, it is also critically important to monitor how people interact with AI systems and observe how those systems react.

"Across a population as diverse as Australia's, the way people request AI systems to take on tasks will differ widely in both in terms of expression and language.

"Understanding how AI reacts to that diversity of interaction needs to be a critical component of the planned legislation."

Professor Mark Sanderson's research covers search engines, usability, data and text analytics. He is also a Chief Investigator at the RMIT University node of the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S).

***
