eSafety Chief: Add YouTube to Social Media Ban, AI Worries

Julie Inman Grant, Australia's eSafety Commissioner, today addressed the National Press Club to outline how her office will enforce the Social Media Minimum Age Bill when it comes into effect in December this year.

Author

  • Tama Leaver

    Professor of Internet Studies, Curtin University

The bill, often referred to as a social media ban, prevents under-16s from having social media accounts. But Inman Grant wants Australians to consider the bill a "social media delay" rather than a ban.

When the ban was legislated in November 2024, the federal government carved out an exemption for YouTube, citing the platform's educational purpose.

Inman Grant has now advised the government to remove this exemption because of the harm young people can experience on YouTube. But as she has also pointed out, there are new risks for young people that the ban won't address - especially from generative artificial intelligence (AI).

Banning YouTube

According to eSafety's new research, 37% of young people have encountered harmful content on YouTube. This was the highest percentage of any platform.

In her speech, Inman Grant argued YouTube had "mastered persuasive design", adeptly using algorithms and recommendations to keep young people scrolling. In her eyes, exempting YouTube from the ban simply makes no sense.

Her advice to Communications Minister Anika Wells, which she delivered last week, is to not exempt YouTube, effectively including that platform in the ban's remit.

Unsurprisingly, YouTube Australia and New Zealand has responded with vigour. In a statement published today, the Google-owned company argues that

eSafety's advice goes against the government's own commitment, its own research on community sentiment, independent research, and the view of key stakeholders in this debate.

YouTube denies it is a social media platform and claims the advice that it should be included in the ban is "inconsistent and contradictory".

But given YouTube Shorts looks and feels very similar to TikTok, with short vertical videos in an endlessly scrolling feed, exempting YouTube while banning TikTok and Instagram Reels never appeared logically consistent.

It also remains the case that any public YouTube video can be viewed without a YouTube account. The argument that including YouTube in the ban would stop educational uses, then, doesn't carry a lot of weight.

How will the ban work?

Inman Grant took great care to emphasise that the responsibility for making the ban work lies with the technology giants and platforms.

Young people who get around the ban, or parents and carers who help them, will not be penalised.

The platforms, along with age verification and assurance vendors, have explored a raft of different tools and technologies to infer the age of users.

Australia's Age Assurance Technology Trial released preliminary findings last week. But these findings really amounted to no more than a press release.

No technical details were shared, only high-level statements that the trial revealed age-assurance technologies could work.

These early findings did reveal that the trial "did not find a single ubiquitous solution that would suit all use cases". This suggests there isn't a single age-assurance tool that's completely reliable.

If these tools are going to be one of the main gatekeepers that do or don't allow Australians to access online platforms, complete reliability would be desirable.

Concerns about AI

Quite rightly, Inman Grant opened her speech by flagging the emerging harms the new legislation will not address. Generative AI was at the top of the list.

Unregulated use of AI companions and bots was of particular concern, with young people forming deep attachments to these tools, sometimes in harmful ways.

Generative AI has also made creating deepfake images and videos much easier, leaving young people far more exposed to harm, and to causing real harm to each other.

As a recent report I coauthored from the ARC Centre of Excellence for the Digital Child highlights, there are many pressing issues in terms of how children and young people use and experience generative AI in their everyday lives.

For example, despite the tendency of these tools to glitch and fabricate information, they are increasingly being used in place of search engines for basic information gathering, life advice and even mental health support.

There are larger challenges around protecting young people's privacy when using these tools, even compared with social media platforms and their already poor privacy records.

There are many new opportunities with AI, but also many new risks.

With generative AI being relatively new, and changing rapidly, more research is urgently needed to find the safest and most appropriate ways for AI to be part of young people's lives.

What happens in December?

Social media users under 16, and their parents and carers, need to prepare for changes in young people's online experiences this December when the ban is due to begin.

The exact platforms included in the ban, and the exact mechanisms to gauge the age of Australian users, are still being discussed.

The eSafety Commissioner has made her case today to include more platforms, not fewer. Yet Wells has already acknowledged that

social media age-restrictions will not be the end-all be-all solution for harms experienced by young people online but they will make a significant impact.

Concerns remain about the ban cutting young people off from community and support, including mental health support. There is clearly work to be done on that front.

Nor does the ban explicitly address concerns about cyberbullying, which Inman Grant said has recently "intensified", with messaging applications at this stage still not likely to be included in the list of banned services.

It's also clear some young people will find ways to circumvent the ban. For parents and carers, keeping the door open so young people can discuss their online experiences will be vital to supporting young Australians and keeping them safe.

The Conversation

Tama Leaver receives funding from the Australian Research Council. He is a chief investigator in the ARC Centre of Excellence for the Digital Child.

/Courtesy of The Conversation. This material from the originating organization/author(s) may be of a point-in-time nature, and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).