A new report from the University of East Anglia (UEA) warns that the potential reputational damage of charities using AI-generated images in their campaigns is more complex than many organisations realise.
It comes as humanitarian budgets tighten and production pressures increase, with many charities and NGOs turning to AI, tempted by its promise of speed, cost efficiency and creative flexibility.
The study suggests the charity and development sector's "high-tech shortcut" to empathy is backfiring. While AI offers a cheaper, faster way to produce campaign visuals, it risks breaking the fundamental bond of trust between charities and the public, say the authors.
The report, Artificial Authenticity, analysed 171 AI-generated images and more than 400 public comments surrounding campaigns from 17 organisations, including Amnesty International, Plan International, the World Health Organization (WHO) and WWF.
The findings reveal a worrying shift: when AI images are used, the humanitarian cause effectively disappears from the conversation. The researchers found the introduction of AI fundamentally reshapes how the public engages with charities.
Co-author David Girling, from UEA's School of Global Development, said: "Charities exist because people care about other people. The moment audiences start questioning whether what they are seeing is real, the emotional connection that drives support is put at risk.
"The debate about the ethics of AI is increasingly polarised. AI is not inherently wrong, but if it begins to overshadow the human story at the heart of charitable work, organisations could lose far more in trust than they gain in efficiency."
Key findings from the study, published today, include:
Nearly 70 per cent of the AI images analysed were designed to appear photorealistic. Poverty was the dominant theme, accounting for around a third of the images (51 of 171) and often featuring children, followed by images themed around the environment (35) and human rights (32).
While 85 per cent of images were appropriately captioned as AI-generated, this transparency did not protect causes or organisations from backlash.
In undisclosed campaigns, the audience adopted an "investigative tone." Instead of evaluating the charity's work, commenters focused entirely on whether the images were artificial or not.
The report also found significant public backlash against "message-medium misalignment". For example, environmental organisations like WWF Denmark faced criticism for using energy-intensive AI tools to promote sustainability, an irony not lost on a climate-conscious public who labelled the move "ecocidal".
For some organisations, mock visuals are seen as a way to balance storytelling with safeguarding and dignity: using AI-generated imagery could spare vulnerable people from being re-traumatised by the process of being photographed or filmed for campaign purposes. However, the study shows that donors often reject these "fake" images, prioritising their own need for an "authentic witness" over the beneficiary's right to privacy.
The researchers found the public response was far from simple. In some cases, people welcomed AI as a way to protect vulnerable individuals from exploitation. In others, they criticised it as a distraction from real solutions, particularly in emotionally sensitive campaigns such as cancer or famine.
When AI is used, discussion often shifts away from the cause and towards debates about technology and trust. Of the comments analysed, 141 focused on AI ethics and authenticity concerns rather than the charitable cause; 122 critiqued technical execution and visual quality; and only 80 (fewer than 20 per cent) actually engaged with the humanitarian issue itself.
Co-author Deborah Adesina, a former Master's student in the School of Global Development and now a media, communications and development consultant, said: "Ultimately, the future of charity storytelling will not hinge on technological capability alone. It will depend on whether organisations can maintain legitimacy, transparency and moral coherence in an environment where audiences are increasingly media literate and increasingly sceptical.
"For communications teams who opt to include generative AI in their workflow, proper training in ethical prompt engineering will be crucial to avoid reputational harm and unintended bias."
The study, Artificial Authenticity: The Rise of Images Generated by Artificial Intelligence in Charity and Development Communications, maps current practice and offers practical recommendations for charities, fundraisers and sector leaders navigating this rapidly evolving digital landscape.
These include working with technology providers and AI companies to develop charity-sector-specific AI tools with built-in bias detection, stereotype alerts, and ethical guardrails tailored to humanitarian representation.
In addition, organisations that choose to use AI-generated imagery should co-create it with local communities by involving them in the creative process, including generating AI prompts and approving final imagery, to ensure it is accurate and culturally appropriate.
The full report and the database of AI-generated charity images are available at www.charity-advertising.co.uk.