Around the world, lawmakers are grappling with how to better protect young people from online harms such as cyberbullying, sexual exploitation and AI-generated "deepfake" images.
Authors
- Claire Henry
Associate Professor in Screen, Flinders University
- Michael S. Daubs
Senior Lecturer in Media, Film, and Communication, University of Otago
Recent reforms overseas - notably Australia's landmark move to restrict young people's access to social media - have sharpened debate about how far governments should go.
Despite past and current efforts - including a government inquiry due shortly to report its final findings - New Zealand arguably lags behind other developed countries in tackling a problem that is growing more serious and complex by the year.
In 2026, the question facing the government is whether to cautiously follow overseas models, or to use this moment to develop a response better suited to its own legal, social and cultural context.
What is online harm?
Online harm can take many forms, including exposure to illegal material, AI-driven racial bias, and the non-consensual sharing of intimate images. As Netsafe highlights, online abuse and harassment can unfold across social media, messaging apps, email and text, and often involves repeated or sustained behaviour.
New Zealand's legislative response has developed gradually over the past decade. A major step was the Harmful Digital Communications Act 2015, which introduced civil and criminal penalties for serious online abuse and established Netsafe as the approved agency for complaints and dispute resolution.
Since then, governments have attempted broader reform. In 2018, the Department of Internal Affairs launched a wide-ranging regulatory review, followed in 2021 by the Safer Online Services and Media Platforms review, which aimed to modernise online safety protections and oversight.
However, that process stalled and in May 2024 the review was terminated by Internal Affairs Minister Brooke van Velden. A year later, the government launched a new inquiry into "the harm young New Zealanders encounter online".
In the meantime, New Zealand's fragmented and increasingly outdated regulatory framework is struggling to keep pace with fast-evolving digital risks.
What can NZ learn from other countries?
Many submissions to the government's latest inquiry urged New Zealand to learn from overseas experience, while others noted that not all of those solutions would work at home.
InternetNZ argued that as a small and relatively late mover, New Zealand can "piggyback" on reforms in larger markets, so long as it ensures they reflect the country's "unique local context, both socially and practically". The Inclusive Aotearoa Collective - Tāhono similarly stressed the need to protect sovereignty.
Others argued New Zealand should draw on its reputation for innovation and develop its own culturally appropriate approaches.
Amokura Panoho of Pou Tangata Online Safety, for instance, called for updating the Harmful Digital Communications Act to address emerging AI harms such as deepfakes, and creating new Māori-led reporting pathways tailored for young Māori to seek help. Advocates argue this could allow New Zealand to anticipate future risks rather than chase them.
Australia's move to ban social media for under-16s has loomed large over the inquiry. While France and the United Kingdom are considering similar bans, there are concerns blanket age restrictions can be blunt instruments and that young people often find ways around age-verification systems.
This international focus was reinforced in the inquiry's interim report, which drew heavily on models from Australia, the UK, Ireland and the European Union. But submitters also pointed to other lessons, including the UK's Internet Watch Foundation, South Korea's online safety framework and California's youth privacy laws.
A further complication is that many international reforms remain largely untested. Australia's Online Safety Act is still being rolled out in phases, while the EU's Digital Services Act only entered full force in early 2024. As a result, evidence about their effectiveness remains limited.
The case for a national regulator
One of the clearest options emerging from the inquiry is the creation of a national online safety regulator: a model already adopted in several comparable countries, including Australia, the UK and Ireland.
In the UK, communications regulator Ofcom oversees the Online Safety Act 2023, while Australia's eSafety Commissioner was granted expanded powers under the Online Safety Act 2021.
A 2021 Department of Internal Affairs report concluded that a central regulator in New Zealand could streamline oversight, provide a single point of contact and improve enforcement. The inquiry's interim report reached a similar conclusion, pointing to the benefits of coordinated regulation and proactive "safety by design" rules.
But reform has been slowed by political caution, particularly around concerns about freedom of expression. The government's preference for light-touch regulation has left gaps - notably in addressing emerging harms such as sexualised deepfakes - prompting ACT MP Laura McClure's member's bill aimed at closing some of those loopholes.
The inquiry's final report, and the government's response to it, offer a rare opportunity to reset direction. The challenge will be to move beyond piecemeal reform and design a system capable of keeping pace with rapid technological change, while placing the voices of young people and Māori at its centre.
Claire Henry receives funding from the Australian Research Council as a DECRA Fellow. She previously received a research grant from InternetNZ (2018) for an unrelated project on "Preventing child sexual offending online through effective digital media."
Michael S. Daubs was commissioned by the Department of Internal Affairs to co-author the 2021 report with Peter Thompson.