Experts are warning Australia's charity sector to be judicious in its use of artificial intelligence or risk losing public trust.
The University of Queensland's Dr Kim Weinert and Adjunct Associate Professor Brydon Wang examined the adoption and governance of AI by charities in Australia, finding the benefits for the sector come with particular risks.
"The charity sector is under ever-increasing pressure, so understandably any intervention that can save time, labour or money is very appealing," Dr Weinert said.
"Charities are not only using AI for administration, communications, fundraising and compliance tasks but also increasingly to prioritise where their resources should go.
"This means algorithms are directly participating in decision-making about client access to support."
Dr Weinert said this could be problematic.
"Charities can exercise power over people who depend on their support and who may not have the capacity to challenge decisions or seek help elsewhere," she said.
"Even though algorithmic criteria seem to be neutral, they can indirectly discriminate against a person or group already experiencing disadvantage.
"There is a danger vulnerable people or groups can be excluded from help or harmed by AI-driven mistakes.
"Furthermore, charity governance in Australia is built on fiduciary duties designed for human decision-makers, so when technology participates in or drives key decisions instead of people, the chain of accountability can break down."
Dr Wang said charities can ultimately operate only if they earn and maintain the public's trust, but that trust is often confused with trustworthiness.
"Trust is a willingness to be vulnerable to someone else's actions while trustworthiness is demonstrated through signals that show you deserve that trust," Dr Wang said.
"Good intentions and mission statements only go so far - and charities shouldn't assume trust already exists simply because the vulnerable communities they serve may lack viable alternatives.
"When deploying AI systems the key question is therefore not whether an organisation is trusted, but whether it is acting in a trustworthy manner."
The researchers propose a trustworthiness-based framework, developed by Dr Wang, to help charities assess whether the use of AI is justified.
Proposed trustworthy AI framework
Benevolence: the AI system must serve the interests of the end-users and beneficiaries, not just organisational efficiency. This is fundamental to the reason charities exist.
Integrity: AI use must align with the values of the community the charity serves and with the charity's purpose, while meeting legal and human rights obligations.
Ability: only AI systems the charity understands and oversees should be used; where a system cannot be meaningfully supervised, choosing not to deploy it may demonstrate "ability" through restraint.
Dr Weinert said the proposed framework redirects decision-making in the charitable sector.
"AI might offer increased productivity and efficiency, but human rights, public benefit and accountability should remain at the centre of any consideration about technology," she said.
"We are urging charities which use AI to be deliberate, cautious and transparent so it benefits their operations and those they help."
Dr Weinert is from UQ's Law School, while Dr Wang is an Adjunct Associate Professor in UQ's Centre for Policy Futures.
The full White Paper can be accessed on the Centre for Policy Futures website.