AI Chatbots Raise Teen Mental Health Concerns

Rice University

As chatbots powered by artificial intelligence become more embedded in young people's daily lives, researchers, parents and regulators are raising concerns about their potential impact on adolescent mental health. Recent reports suggest a large majority of children and teens have used AI chatbots, sometimes turning to them for emotional support or advice - roles experts caution these tools are not equipped to handle safely.

Experts say teens are increasingly turning to digital platforms and AI tools for connection and support - raising new questions about well-being and safeguards.

At the same time, regulators are taking notice. The Federal Trade Commission recently launched an inquiry into AI chatbots acting as "companions," citing concerns about how these systems may interact with users, particularly minors.

Efforts by some companies to restrict teen access have drawn skepticism, as experts note that age bans and moderation tools are often easy to bypass and do not fully address the underlying risks.

Tianjun Sun, assistant professor of psychological sciences at Rice University, specializes in human-AI interaction, psychological assessment and the responsible design of AI systems used in high-stakes contexts such as hiring, education and skill development. Her research examines how AI systems shape human judgment, behavior and developmental outcomes - particularly when users rely on AI tools for evaluation, guidance or support.

"Adolescents are developing core emotional and social skills, and chatbots are not inherently designed to support that growth," Sun said. "When young people begin turning to AI as a substitute for human connection or advice, the risk is not just misinformation; it is the gradual reshaping of expectations for relationships, emotions and help-seeking in ways we do not yet fully understand or regulate."

Sun can speak to:

  • How interactions with chatbots differ from human relationships, drawing on research in social, developmental and human-technology interaction.
  • Why chatbots can appear supportive while still posing psychological risks - including emotional dependency and impacts on development and identity formation.
  • Why age-gating and content moderation fall short, and the need for psychologically informed design safeguards such as limits on emotional intimacy, clearer guardrails for advice-giving and transparency about chatbot capabilities.

Sun recently led a Rice study examining how AI-powered chatbots are used in hiring, highlighting broader questions about responsibility and trust when AI systems influence human outcomes.

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and edited for clarity, style and length. Mirage.News does not take institutional positions or sides; all views, positions and conclusions expressed herein are solely those of the author(s).