The proliferation of extremely violent material online, including footage of recent assassinations, brutal murders, mass casualty events and armed conflict, has prompted eSafety to issue an urgent Online Safety Advisory, Gore online: How violent content is reaching children and what you can do.
So-called "gore" content is surfacing with disturbing frequency on young people's devices via autoplay, recommendations, direct messages and reposts.
Once uploaded, the same clips can circulate across mainstream platforms such as X, Facebook, Instagram, Snapchat, TikTok and YouTube, and can also be shared directly in private messages and group chats.
eSafety's latest research shows 22 per cent of children aged 10 to 17 have seen extreme real-life violence online.
Growing exposure to and accessibility of gore have fuelled the rise of dedicated gore websites offering searchable libraries of content, follower tools, chat functions and recommendation loops.
Many of these gore sites are situated in "permissive jurisdictions" and have complex hosting arrangements to evade removal by authorities.
eSafety Commissioner Julie Inman Grant said young people were often drawn to view or share the material to impress or outdo peers, without fully understanding its nature, impact and long-term consequences.
"The advisory explains how gore circulates online and the risks it poses for children and young people," Ms Inman Grant said.
"My concern is not just how fast this material spreads, but how algorithms amplify it further. Algorithms reward engagement, even when that is driven by shock, fear and outrage.
"While most social media networks have policies that require the application of sensitive content labels or interstitials to blur gore rather than exposing innocent eyes to such visceral and damaging content, we have seen the major platforms fail to deploy these filters quickly or consistently.
"Advanced AI tools should help aid detection, blocking and removal of content, and increase the speed in which such protective filters can and should be deployed. Instead, as a likely result of decreased investment in trust and safety personnel and tools, a rollback of content moderation policies and clear latency in detection, the application of these filters often lags the content's virality.
"We expect the major platforms to do better," Ms Inman Grant said.
eSafety is currently implementing the Social Media Minimum Age (SMMA), requiring platforms to take reasonable steps to prevent Australian children under 16 from having social media accounts.
eSafety has also recently registered Phase 2 industry codes designed to protect children from age-inappropriate material, including pornography, extreme violence and gore, suicidal ideation and self-harm.
The codes will provide further protections against exposure to such material on services which are either not subject to the SMMA or accessible without an account.
They will also complement Phase 1 industry codes and standards, which address the worst-of-the-worst online material, such as child sexual abuse and pro-terror material.
eSafety's Online Safety Advisory includes practical steps families, schools and platforms can take to help prevent exposure and support children and young people who are affected.
eSafety has also updated its guidance for educators, parents and carers on how to speak to children or young people who may have come across graphic or violent material online.
Australians can report harmful material directly to eSafety at eSafety.gov.au.