Extra Email: Game-Changer for Public Policy Reform

Australian Treasury

Every year, thousands of Australian charities are required to file an Annual Information Statement. These filings keep the public Charity Register accurate, help donors know where their money is going, and underpin trust in the sector. Yet despite regular reminders from the regulator, a persistent minority of charities still file late.

In the past, efforts to boost compliance drew on a mix of practical tools and professional judgment. Officials might introduce a new communications strategy, simplify the form, or send out further reminders. Some of these steps worked better than others, but assessing impact was difficult. If compliance rates rose, the change was credited; if not, further adjustments were tried.

This year, the Australian Charities and Not‑for‑profits Commission (ACNC) took a different approach. With the help of the Australian Centre for Evaluation and the Behavioural Economics Team of the Australian Government, it ran a randomised trial. Fifteen thousand charities were split into 2 groups. Both received the standard reminder email to the organisation's official address. But half also had an additional reminder sent directly to one of the people legally responsible for running the charity.

The results were clear. Charities that received the extra email were almost 6 percentage points more likely to file on time, and they filed around 3 days earlier on average. That small change - sending a second message to the right person - translated into hundreds more charities meeting their obligations, improving transparency, and reducing last‑minute scrambles for extensions.
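To see where "hundreds" comes from, a rough back-of-the-envelope check helps: assuming the 15,000 charities were split evenly, as "half" suggests, a lift of almost 6 percentage points on roughly 7,500 charities works out to around 450 extra on-time filings. The short sketch below simply restates that arithmetic; the figures are those quoted above, and the exact group sizes are an assumption.

```python
# Back-of-the-envelope check, using only the figures quoted above.
# Assumption: the 15,000 charities were split evenly, as "half" suggests.
trial_size = 15_000
treatment_group = trial_size // 2      # roughly 7,500 charities
lift = 0.06                            # "almost 6 percentage points"

extra_on_time = treatment_group * lift
print(f"Implied extra on-time filings: about {extra_on_time:.0f}")  # about 450
```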

It might seem modest. An extra email hardly qualifies as a bold reform. But the point is not the size of the intervention; it is the certainty of the evidence. Instead of relying on assumptions, the regulator asked a simple but radical question: does it work?

Randomised trials are common in medicine. From pharmaceuticals to surgical procedures, they are one of the main tools medical researchers use to decide what works. As patients, we take it for granted that treatments must be proven safe and effective before they are rolled out at scale. In policy, the picture is more mixed: while many programs are carefully designed and monitored, evaluation has not always used the same tools that medicine employs. That is why each new randomised policy trial adds so much value: it brings greater clarity about what works, for whom, and in what circumstances.

The ACNC's study is a good example of this. By using randomisation, it created 2 comparable groups: one that received a new, additional reminder, and one that didn't. If the outcomes diverge by more than chance alone would explain, we can be confident the difference comes from the intervention. Trials like this are a practical way of turning promising ideas into reliable knowledge, and doing so in an ethical way - the study was approved by an independent ethical review panel.
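For readers curious what such a comparison looks like in practice, here is a minimal sketch in Python of a two-arm trial: a 50/50 random assignment followed by a simple difference-in-proportions check. The outcome data are simulated and the numbers are illustrative; this is not the ACNC's data or analysis code.

```python
import random
from statistics import NormalDist

# Minimal illustrative sketch of a two-arm randomised comparison.
# Not the ACNC's data or analysis code; outcomes are simulated.
random.seed(42)

charity_ids = list(range(15_000))
random.shuffle(charity_ids)
half = len(charity_ids) // 2
treatment = set(charity_ids[:half])   # would receive the extra email
control = set(charity_ids[half:])     # standard reminder only

def simulated_outcome(charity_id):
    """Hypothetical on-time filing: 70% base rate, plus a 6-point lift for treatment."""
    lift = 0.06 if charity_id in treatment else 0.0
    return random.random() < 0.70 + lift

outcomes = {cid: simulated_outcome(cid) for cid in charity_ids}

p_t = sum(outcomes[c] for c in treatment) / len(treatment)
p_c = sum(outcomes[c] for c in control) / len(control)
diff = p_t - p_c

# Pooled two-proportion z-test: is the gap bigger than chance alone would explain?
p_pool = sum(outcomes.values()) / len(outcomes)
se = (p_pool * (1 - p_pool) * (1 / len(treatment) + 1 / len(control))) ** 0.5
z = diff / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"on-time rate (extra email): {p_t:.3f}")
print(f"on-time rate (standard):    {p_c:.3f}")
print(f"difference: {diff:.3f}, z = {z:.2f}, p = {p_value:.4f}")
```

The core logic is simply a comparison between two randomly formed groups; anything more sophisticated in a real evaluation builds on that foundation.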

This is far from Australia's first foray into randomisation. A randomised evaluation of the NSW Drug Court showed that putting offenders through a rigorous treatment program led to lower reoffending rates than sending them through the traditional criminal justice process.

Sometimes trials reveal surprising results. In Perth, a large trial tested a popular program that gave teenage girls life‑like baby dolls to simulate the responsibilities of parenthood. The idea was to reduce teenage pregnancies by making parenthood look less glamorous. The results showed the opposite. Girls who took part in the program were actually more likely to experience a teen pregnancy than those in the control group. Without a trial, this program might have continued to expand, backed by good intentions but delivering the wrong outcome.

Momentum for rigorous evidence is growing. Last year, the Paul Ramsay Foundation committed $2.1 million to support 7 new low‑cost randomised evaluations, each receiving up to $300,000. More than 100 applicants put their hands up, reflecting just how much interest there is across the charity sector in testing ideas rigorously. These projects are now building a pipeline of Australian studies that complement government efforts and broaden the evidence base.

These examples underline how small, well‑designed interventions can shift behaviour in ways that matter, and how even well‑meaning ideas can backfire. They also highlight the limits of intuition. Without trials, who could have been sure that drug courts cut crime, or that infant simulators increase rates of teen pregnancy?

The ACNC experiment shows what is possible when agencies embrace testing as part of their core business. Announcing a new initiative can sound bold. Saying 'we'll trial it first' can sound cautious. But the truth is the opposite. There is nothing cautious about putting an idea to the test and being ready to learn from the results.

Of course, trials are not free. The ACNC's additional reminder created some administrative work: cleaning email lists and handling a small rise in calls to the help centre. Over time, though, such processes can be automated and refined. The bigger prize is a regulatory system that learns and improves, building on each new piece of evidence.

This study has shown how a small change can make a difference. The next step is to keep looking for other areas where testing can sharpen our understanding, refine our policies, and strengthen public trust.
