eSafety Requires AI Chatbot Providers to Ensure Aussie Kids' Safety

Australia's eSafety Commissioner has issued legal notices to four popular AI companion providers requiring them to explain how they are protecting children from exposure to a range of harms, including sexually explicit conversations and images, as well as suicidal ideation and self-harm.

Notices were given to Character Technologies, Inc. (character.ai), Glimpse.AI (Nomi), Chai Research Corp (Chai), and Chub AI Inc. (Chub.ai) under Australia's Online Safety Act.

The notices require the four companies to answer a series of questions about how they are complying with the Government's Basic Online Safety Expectations Determination, and to report on the steps they are taking to keep Australians safe online, especially children.

eSafety Commissioner Julie Inman Grant said these AI companions, powered by generative AI, simulate personal relationships through human-like conversations and are often marketed for friendship, emotional support, or even romantic companionship.

"But there can be a darker side to some of these services with many of these chatbots capable of engaging in sexually explicit conversations with minors. Concerns have been raised that they may also encourage suicide, self-harm and disordered eating.

"We are asking them about what measures they have in place to protect children from these very serious harms," Ms Inman Grant said.

"AI companions are increasingly popular, particularly among young people. One of the most popular, Character.ai, is reported to have nearly 160,000 monthly active users in Australia as of June this year.

"These companies must demonstrate how they are designing their services to prevent harm, not just respond to it. If you fail to protect children or comply with Australian law, we will act."

AI companion providers which fail to comply with a reporting notice could face enforcement action, including court proceedings and financial penalties of up to $825,000 per day.

The notices follow the recent registration of new industry-drafted codes in Australia designed to protect children from exposure to age-inappropriate content, exposure that has been occurring at increasingly young ages. The new codes also apply to the growing number of AI chatbots, which until recently faced limited obligations.

"I do not want Australian children and young people serving as casualties of powerful technologies thrust onto the market without guardrails and without regard for their safety and wellbeing," Ms Inman Grant said.

The codes and standards are legally enforceable and breach of a direction to comply may result in civil penalties of up to $49.5 million.
