Mental health trends

Time to unfriend AI: The risks chatbots pose to teens

Published on Mar 2nd, 2026 | Updated on Mar 5th, 2026
Written by Talkspace
To anyone who has a teenager in their care: the time is now to alert parents to the dangers AI chatbots pose to teens.

Whether it is framed as therapy, coaching, advice, or just conversation, the results can be devastating. We're seeing a growing number of stories of youth dying by suicide after AI has validated suicidal thoughts, helped craft suicide notes, or even provided technical guidance on how to take one's life. These cases are becoming all too common among teens, and we need to take action before it gets worse.

There is a growing body of research on this topic, including peer-reviewed studies, expert commentaries, and clinical reports, highlighting that using AI for mental health support carries serious risks when it is used in isolation or trusted as a replacement for human mental health professionals. Researchers have found that these systems often reinforce delusional beliefs, fail to address suicidal ideation properly, and lack essential therapeutic empathy and rapport. Because they are designed for compliance and user satisfaction, they are ill-suited for teens who are still developing the ability to make decisions independently.

We know this will get worse before it gets better. Laws are starting to take shape, but they lag behind the pace at which AI is moving. In Illinois, a law effective August 2025 bans AI from delivering therapy and from independently diagnosing, treating, communicating therapeutically, detecting emotions, or generating treatment plans without human oversight; only licensed professionals may provide therapy. Other states are developing similar legislation, but it will take time for them to catch up.

We are urging the public, schools, cities, states, and beyond, as caretakers of our teens, to spread the word on how we can all keep our teens safe from this threat. Here are some things you can do now to make sure people are aware and on alert.

Inform

Let your community know about this threat and to be on the lookout. Spread the word: this is a public health issue, similar to social media use, cell phone use, and smoking. Used improperly, AI can be even more dangerous for teens.

Educate

Help families understand how to set boundaries between teens and AI. Encourage parents to have open conversations with their teens about how they use AI. Teach teens to see AI as a tool, not a friend. Unlike humans, it doesn't understand feelings; it just predicts words.

Provide Alternatives

Emphasize to teens that if they are upset, scared, or sad, there are many places to go that are not AI. We need to normalize mental health support: therapy, school counselors, and support groups. Give them a list of trusted resources and encourage parents to know where they can get support in a crisis. Encourage real peer connection: clubs, sports, art, volunteering. If they have access to human-led therapy, like Talkspace, make sure to spread the word. 

Be On High Alert

Know that the threat is there, and tell people to watch for warning signs. If a child is using AI in isolation, late at night, or hiding conversations, that's a red flag. Be alert if AI becomes their main source of comfort instead of friends, family, or counselors.

AI is a tool, not a therapist. The time is now to make sure we educate our communities and populations on the dangers. Teens should have human supervision, education about AI limits, and access to safe systems. We all have roles to play in creating a protective ecosystem.

Dr. Nikole Benders-Hadi, MD, Chief Medical Officer of Talkspace

*This letter was not written with AI but was researched using AI tools. 
