New Zealand's child protection system is under strain. Can predictive tech help?

Across child protection services, frontline staff are often making decisions in the hardest possible conditions: under time pressure, with incomplete information and high stakes on every side.

Author

  • Dylan A Mordaunt

    Research Fellow, Faculty of Education, Health, and Psychological Sciences, Te Herenga Waka — Victoria University of Wellington; Flinders University; The University of Melbourne

Get it wrong and the consequences are serious. A child may remain in danger. Or a family may be disrupted unnecessarily, with harms of its own.

There is also a triage problem. Some families need urgent intervention. Some need support. Some need monitoring. And some need less intrusion, not more.

In practice, those judgements already rely on reading signals from fragmented information and, in effect, making predictions about risk.

Predictive modelling aims to make that process more systematic. By analysing patterns in large administrative datasets, it can help identify which children may be most at risk of future harm.
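As a very rough illustration (not any agency's actual model), a risk score of this kind amounts to weighting a handful of administrative flags and mapping the sum to a probability-like number. The feature names and weights below are entirely invented for the sketch:

```python
import math

# Hypothetical, illustrative only: a toy "risk score" that combines a few
# binary administrative flags with weights a real model would learn from
# historical data. Every feature name and weight here is invented.
WEIGHTS = {
    "prior_report_of_concern": 1.2,
    "caregiver_under_20": 0.6,
    "benefit_history": 0.4,
}
BIAS = -2.0

def risk_score(features: dict) -> float:
    """Map binary features to a probability-like score in (0, 1)."""
    z = BIAS + sum(WEIGHTS[name] * features.get(name, 0) for name in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic (sigmoid) transform

case = {"prior_report_of_concern": 1, "caregiver_under_20": 0, "benefit_history": 1}
print(round(risk_score(case), 3))  # a score, not a decision
```

In practice the weights come from fitting a statistical model to linked administrative records; the point here is only that the output is a graded score, which still leaves the decision of what to do with it to people.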

With New Zealand's social workers under more strain than ever, what are the opportunities in using these tools more actively - and what are the potential dangers?

NZ and predictive analytics

New Zealand is no stranger to predictive modelling, nor to the debate surrounding it.

More than a decade ago, it was among the first countries to seriously explore how predictive modelling could be applied to child protection.

Work led by Professor Rhema Vaithianathan and colleagues at the Auckland University of Technology showed that integrated administrative data could identify newborn children at elevated risk of later maltreatment.

Still, agencies have been deliberately cautious in framing how these models might be used.

The Ministry of Social Development has said they should enhance intake decisions, support rather than replace professional judgement and first be tested in a simulated setting. A Statistics New Zealand peer review echoed that point: a model should trigger closer assessment, not automatic intervention.

Steps to move from research to practice have nonetheless proved contentious.

A proposed 2015 observational study - which would have assigned risk scores to newborns and tracked outcomes - was ultimately halted amid concerns about privacy, bias and the role of the state.

While these concerns have not disappeared, neither has pressure on the system. Oranga Tamariki received more than 55,000 reports of concern in the second half of 2024 - a sharp increase on the previous year.

Recent internal surveys of the agency's frontline staff, meanwhile, highlight that cases are becoming more complex and that decisions are being made under increasingly uncertain conditions.

Predictive modelling tools, however, are still not used by those workers. To date, testing of the technology has been carefully limited to historical, anonymised data - and carried out alongside extensive ethical, privacy and Māori-led reviews.

Promise and pitfalls

Where predictive modelling has been piloted in the United States, subsequent evaluations have suggested it can help if used carefully.

In Pennsylvania's Allegheny County, for instance, one pilot programme resulted in fewer children being removed from their homes. In another, in Los Angeles, the number of cases where children suffered life-threatening harm was observed to fall by 23%.

This suggests models can add precision to interventions. But that hasn't always been the case.

Authorities in Illinois abandoned one system after it produced too many alerts. It was also criticised for missing cases that resulted in tragedy, despite the children already being known to child welfare agencies.

This demonstrated that if a model overwhelms workers with alerts, it can simply add clutter instead of reducing harm.

Another risk facing frontline workers comes from "false negatives" (missed cases) and "false positives" (families wrongly flagged).

The former can mean a child remains unsafe. The latter can mean a child is removed unnecessarily, with serious and lasting consequences.
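The tension between the two error types is easy to see in miniature. The sketch below, using entirely invented scores and outcomes, shows how moving an alert threshold trades missed cases against wrongful flags:

```python
# Illustrative only: synthetic (score, actual_harm) pairs showing how the
# choice of alert threshold trades false negatives against false positives.
cases = [(0.9, True), (0.7, False), (0.6, True), (0.4, False),
         (0.3, True), (0.2, False)]

def errors(threshold):
    """Count missed cases and wrongful flags at a given alert threshold."""
    false_neg = sum(1 for score, harm in cases if harm and score < threshold)
    false_pos = sum(1 for score, harm in cases if not harm and score >= threshold)
    return false_neg, false_pos

for t in (0.25, 0.5, 0.8):
    fn, fp = errors(t)
    print(f"threshold={t}: missed cases={fn}, wrongly flagged={fp}")
```

A low threshold misses little but flags many families wrongly; a high one does the reverse. No threshold eliminates both errors - which is why where to set it is a value judgement, not a technical one.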

This challenges the logic of workers "erring on the side of caution" in their decision-making.

If caution means reflexive removal, it can create a different form of damage. Here, the case for predictive analytics is arguably strong.

Should 'do nothing' stay an option?

In New Zealand, there are obvious sociological factors that make this issue more complex. One is the risk that existing patterns of inequality are reproduced, because Māori are disproportionately represented in child protection pathways.

That pattern is not unique to Aotearoa: in Australia, Aboriginal and Torres Strait Islander children are around 11 times as likely as non-Indigenous children to be in out-of-home care. That is why Indigenous data sovereignty cannot be an afterthought in any moves to use predictive modelling.

Nor is it enough to simply say a model is "evidence-based". Agencies need to be clear about what data is being used, what it is trying to optimise, how decisions can be overridden, how bias is monitored and who can challenge it.
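One concrete form such bias monitoring can take is routinely comparing how often the model flags families across different groups. The records and group labels below are invented purely to show the shape of the check:

```python
# Illustrative only: a simple audit an agency could run, comparing the
# rate at which a model flags families across (synthetic) groups.
records = [
    {"group": "A", "flagged": True},  {"group": "A", "flagged": False},
    {"group": "A", "flagged": False}, {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},  {"group": "B", "flagged": True},
    {"group": "B", "flagged": False}, {"group": "B", "flagged": False},
]

def flag_rate(group):
    """Share of a group's cases the model flags for closer assessment."""
    members = [r for r in records if r["group"] == group]
    return sum(r["flagged"] for r in members) / len(members)

for g in ("A", "B"):
    print(f"group {g}: flagged {flag_rate(g):.0%} of cases")
```

A persistent gap between groups does not by itself prove the model is biased, but it is exactly the kind of signal that should trigger scrutiny of the underlying data and a route for affected communities to challenge the system.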

It may seem safer to reject these tools on perceived moral grounds. Often, it is simply the more familiar choice.

But doing so does not create a neutral system - it means relying on inconsistent judgements made under pressure, with uneven information and little ability to test whether decisions are improving.

Predictive analytics will not fix deeper system failures. But, if carefully governed, it can help prioritise urgency, target support and make decisions more transparent and informed.

The Conversation

Dylan A Mordaunt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Courtesy of The Conversation. This material from the originating organisation/author(s) may be point-in-time in nature and edited for clarity, style and length. Mirage.News does not take institutional positions or sides; all views, positions and conclusions expressed herein are solely those of the author(s).