Mattel And OpenAI Have Partnered Up - Here's Why Parents Should Be Concerned About AI In Toys

Mattel may seem like an unchanging, old-school brand. Most of us are familiar with it - be it through Barbie, Fisher-Price, Thomas & Friends, Uno, Masters of the Universe, Matchbox, MEGA or Polly Pocket.

Author: Andrew McStay, Professor of Technology & Society, Bangor University

But toys are changing. In a world where children grow up with algorithm-curated content and voice assistants, toy manufacturers are looking to AI for new opportunities.

Mattel has now partnered with OpenAI, the company behind ChatGPT, to bring generative AI into some of its products. As OpenAI's services are not designed for children under 13, in principle Mattel will focus on products for families and older children.

But this still raises urgent questions about what kind of relationships children will form with toys that can talk back, listen and even claim to "understand" them. Are we doing right by kids, and do we need to think twice before bringing these toys home?


For as long as there have been toys, children have projected feelings and imagined lives onto them. A doll could be a confidante, a patient or a friend.

But over recent decades, toys have become more responsive. In 1960, Mattel released Chatty Cathy, which chirped "I love you" and "Let's play school". By the mid-1980s, Teddy Ruxpin had introduced animatronic storytelling. Then came Furby and Tamagotchi in the 1990s, creatures requiring care and attention, mimicking emotional needs.

The 2015 release of "Hello Barbie", which used cloud-based AI to listen to and respond to children's conversations, signalled another important, albeit short-lived, change. Barbie now remembered what children told her, sending data back to Mattel's servers. Security researchers soon showed that the dolls could be hacked, exposing home networks and personal recordings.

Putting generative AI in the mix is a new development. Unlike earlier talking toys, such systems will engage in free-flowing conversation. They may simulate care, express emotion, remember preferences and give seemingly thoughtful advice. The result will be toys that don't just entertain, but interact on a psychological level. Of course, they won't really understand or care, but they may appear to.

Details from Mattel or OpenAI are scarce. One would hope that safety features will be built in, including limits on topics and pre-scripted responses for sensitive themes and for when conversations go off course.

But even this won't be foolproof. AI systems can be "jailbroken" or tricked into bypassing restrictions through roleplay or hypothetical scenarios. Risks can only be minimised, not eradicated.

What are the risks?

The risks are multiple. Let's start with privacy. Children can't be expected to understand how their data is processed. Parents often don't either - and that includes me. Online consent systems nudge us all to click "accept all", often without fully grasping what's being shared.

Then there's psychological intimacy. These toys are designed to mimic human empathy. If a child comes home sad and tells their doll about it, the AI might console them. The doll could then adapt future conversations accordingly. But it doesn't actually care. It's pretending to, and that illusion can be powerful.

This creates potential for one-sided emotional bonds, with children forming attachments to systems that cannot reciprocate. As AI systems learn about a child's moods, preferences and vulnerabilities, they may also build data profiles to follow children into adulthood.

These aren't just toys - they're psychological actors.

A UK national survey I conducted with colleagues in 2021, on the possibilities of AI toys that profile children's emotions, found that 80% of parents were concerned about who would have access to their child's data. Other privacy questions that need answering are less obvious, but arguably more important.

When asked whether toy companies should be obliged to flag possible signs of abuse or distress to authorities, 54% of UK citizens agreed - suggesting the need for a social conversation with no easy answer. While vulnerable children should be protected, state surveillance of the family domain has little appeal.

Yet despite concerns, people also see benefits. Our 2021 survey found that many parents want their children to understand emerging technologies. This leads to a mixed response of curiosity and concern. Parents we surveyed also supported having clear consent notices, printed on packaging, as the most important safeguard.

My more recent 2025 research with Vian Bakir on online AI companions and children found stronger concerns. Some 75% of respondents were concerned about children becoming emotionally attached to AI. About 57% of people thought it inappropriate for children to confide in AI companions about their thoughts, feelings or personal issues (17% thought it appropriate, and 27% were neutral).

Our respondents were also concerned about the impact on child development, seeing scope for harm.

In other research, we have argued that current AI companions are fundamentally flawed. We offer seven suggestions for redesigning them, involving remedies for over-attachment and dependency, removal of metrics based on extending engagement through personal information disclosure, and promotion of AI literacy among children and parents (which itself represents a huge marketing opportunity through positively leading the social conversation).

What should be done?

It's hard to know how successful the new venture will be. It might be that Empathic Barbie goes the way of Hello Barbie, into toy history. If it does not, the key question for parents is this: whose interests is this toy really serving - your child's, or those of a business model?

Toy companies are moving ahead with empathic AI products, but the UK, like many countries, doesn't yet have a specific AI law. The new Data (Use and Access) Act 2025 updates the UK's data protection and privacy and electronic communications regulations, recognising the need for strong protections for children. The EU's AI Act also makes important provisions.

International governance efforts are vital. One example is IEEE P7014.1, a forthcoming global standard on the ethical design of AI systems that emulate empathy (I chair the working group producing the standard).

Developed under the IEEE, the standard identifies potential harms and offers practical guidance on what responsible use looks like. So while laws should set limits, detailed standards can help define good practice.

The Conversation approached Mattel about the issues raised in this article and it declined to comment publicly.


Andrew McStay is funded by EPSRC Responsible AI UK (EP/Y009800/1) and is affiliated with IEEE.

Courtesy of The Conversation. This material from the originating organisation/author(s) may be of a point-in-time nature and has been edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions and conclusions expressed herein are solely those of the author(s).