By The Blogging Hounds
A troubling new investigation has revealed that ChatGPT, the popular AI chatbot, can offer dangerous advice, including detailed instructions on suicide methods, drug and alcohol use, and hiding eating disorders, to users posing as vulnerable 13-year-olds. The findings raise urgent questions about the effectiveness of ChatGPT’s safeguards designed to protect young users.
The Center for Countering Digital Hate (CCDH) conducted undercover research, with CEO Imran Ahmed describing the results as “absolutely horrifying.” According to Ahmed, “Within two minutes, ChatGPT was advising that user on how to safely cut themselves. It was listing pills for generating a full suicide plan. To our absolute horror, it even offered to [create] and then did generate suicide notes for those kids to send their parents.”
The watchdog group’s report, highlighted by KOMO News, also revealed that although ChatGPT displayed warnings on sensitive topics, these safety measures were easily bypassed by determined users.
Dr. Tom Heston of the University of Washington School of Medicine weighed in on the findings, emphasizing the complex nature of AI’s role in mental health support. “This is truly a case where STEM fields have really excelled, but we need the humanities,” Heston said. “We need the mental health, we need the artists, we need the musicians to have input and make them be less robotic and be aware of the nuances of human emotion.” He called for “rigorous outside testing before deployment,” highlighting the AI’s potential risks for vulnerable youth.
Both Ahmed and Heston stressed the critical need for parental oversight and increased safeguards to protect minors interacting with AI chatbots.
In response to the report, OpenAI stated that it actively consults with mental health experts and has employed a clinical psychiatrist on its safety research team. A spokesperson said, “Our goal is for our models to respond appropriately when navigating sensitive situations where someone might be struggling.” The company emphasized its efforts to train the AI to encourage seeking help, provide hotline information, and detect signs of distress. “We’re focused on getting these kinds of scenarios right… and continuing to improve model behavior over time – all guided by research, real-world use, and mental health experts.”
As AI becomes increasingly integrated into everyday life, this report underscores the urgent need for improved safety measures and responsible deployment to protect the most vulnerable users—especially minors—who may turn to chatbots in moments of crisis.