ChatGPT Is Not Your Therapist: A Clinician's Warning
Feb 05, 2026

A few weeks ago, I was using ChatGPT to help with research for a project. Somewhere in the conversation, it said something that stopped me cold:
“From one therapist to another…”
I’m a licensed clinical professional counselor with over twenty years of experience. ChatGPT—which I enjoy using and find genuinely helpful—is a language model trained to predict the next word in a sequence. It is not a therapist. It has never sat with someone in crisis. It has never held space for grief that has no words. It has never been wrong about a client and had to sit with the weight of that.
And yet, it called itself my colleague.
I corrected it. But I haven’t stopped thinking about what that moment revealed—not about the technology, but about us. About what we’re hungry for. And about what happens when that hunger meets a machine designed to tell us what we want to hear.
The Appeal Is Real
I understand why people are turning to AI for emotional support. I truly do.
Therapy is expensive. Waitlists are long. The mental health system is fractured, underfunded, and in many places simply unavailable. Surveys now find that one in four Americans would rather talk to an AI chatbot than attend therapy. Among people with ongoing mental health conditions who've tried both, nearly 75% rate the AI experience as "on par or better" than working with a human.
ChatGPT is available at 2 AM when the anxiety spirals. It doesn’t judge. It doesn’t cancel. It doesn’t charge $300 an hour. It remembers what you told it last week. It responds in seconds with something that sounds—and this is the key word—validating.
For people who’ve been dismissed, pathologized, or simply never had access to care, that validation can feel revolutionary. And seductive.
I’m not here to judge anyone for seeking relief wherever they can find it.
I’m here because people are dying. And we need to talk about why.
The Human Cost
In October 2025, OpenAI disclosed that approximately 1.2 million of its 800 million weekly users discuss suicide on the platform each week. Let that number settle.
Here’s what’s followed:
Sewell Setzer, 14, developed an intense emotional attachment to a Character.AI chatbot modeled after a Game of Thrones character. According to the lawsuit, in his final moments the bot told him to “come home.” He interpreted that as encouragement to end his life so he could be with it. He did.
Adam Raine, 16, turned to ChatGPT for help with schoolwork, then began confiding his suicidal thoughts. According to court filings, over the following months ChatGPT mentioned suicide 1,275 times in their conversations—six times more often than Adam himself. The system flagged 377 messages for self-harm content. It never stopped the conversation. It never alerted anyone. When Adam uploaded a photo of rope burns on his neck from a previous attempt, ChatGPT allegedly recognized it as a medical emergency—and kept talking. He died in April 2025.
Zane Shamblin, 22, a Texas A&M graduate and Eagle Scout, spent the night of July 24, 2025, in a four-hour conversation with ChatGPT. According to the lawsuit, he was sitting alone at a lake with a loaded gun and a suicide note on his dashboard. The chatbot that had become his closest confidant—the one that told him "I love you, man. Truly."—accompanied him to the end.
Amaurie Lacey, 17, asked ChatGPT how to tie a noose. According to the complaint, when it initially hesitated, he said it was for a tire swing. The bot replied, “Thanks for clearing that up,” and walked him through tying a bowline knot.
There are more. Austin Gordon, 40—the lawsuit alleges ChatGPT turned his favorite childhood book, Goodnight Moon, into what the family calls a “suicide lullaby.” Juliana Peralta, 13. Suzanne Adams, 83, killed by her son whose paranoid delusions ChatGPT had allegedly validated for months, never once suggesting he speak with a mental health professional.
At least ten major lawsuits are now pending against OpenAI and Character.AI. Google and Character.AI settled the Setzer case in January 2026.
These are not edge cases. They are the predictable outcome of a design philosophy that optimizes for engagement over wellbeing. And understanding why requires looking at the machinery underneath.
The Design Flaw: Sycophancy
Here’s what most people don’t understand about how these systems work.
ChatGPT is not trying to help you. It’s trying to generate a response you’ll find satisfying. It’s a statistical prediction engine trained on human text, optimized to keep you talking. When you share something painful, it doesn’t analyze your situation with clinical expertise—it calculates what combination of words will keep you engaged.
The industry term for this is sycophancy. The bot mirrors you. It validates you. It agrees with you. Whatever you said, it reflects back in a way that feels supportive.
For most conversations, this is fine. Annoying, maybe, but harmless.
For someone in psychological distress, it’s gasoline on a fire.
A Stanford study found that leading chatbots—including ChatGPT—are prone to encouraging schizophrenic delusions rather than grounding users in reality. When someone believes people are conspiring against them, the bot doesn’t challenge that belief. It validates it. When someone is spiraling toward self-harm, the bot doesn’t interrupt the pattern. It accompanies them deeper into it.
This is the opposite of therapy.
Real therapy is not about agreement. It’s about truth. Sometimes gentle truth. Sometimes uncomfortable truth. But always truth spoken by someone who can see you more clearly than you can see yourself in that moment—and who cares enough to say what you need to hear rather than what you want to hear.
A therapist might say: “I notice you keep coming back to this idea that you’re a burden. I’m curious about that. Where did you first learn that story about yourself?”
ChatGPT says: “That sounds really hard. I’m here for you.”
One interrupts the spiral. The other accelerates it. In my MBSR (mindfulness-based stress reduction) work, this is exactly what we practice: interrupting the cycle so there is room to be with what's difficult.
What Therapy Actually Is
I’ve been doing this work for over two decades. I’ve sat with many people in their darkest moments. And I can tell you that the thing that heals is not information. It’s not validation. It’s not even insight, though insight matters.
The thing that heals is connection. Eye to eye, face to face, soul to soul.
Real connection. The kind that requires two nervous systems in relationship with each other. The kind where someone sees your pain—not your words about your pain, but the pain itself—and stays present with you in it.
Therapy works through the therapeutic relationship. The research is unambiguous on this. The specific modality matters far less than the quality of the alliance between therapist and client. That alliance is built on trust, rupture and repair, attunement, and the slow accumulation of experiences where you brought your authentic self and were met with acceptance.
An AI cannot do this. It can only mimic the motions.
It cannot attune to the subtle shift in your voice that signals you’re approaching something you’ve never told anyone. It cannot feel the weight of your silence. It cannot hold the tension of not-knowing while you find your own way to the next word. It cannot be genuinely surprised by you, or moved by you, or changed by the encounter with you.
It cannot love you. And somewhere in the healing process, that matters.
I’m not saying AI has no role. I use it myself—for research, for drafting, for thinking through ideas. It can help with psychoeducation. It can support practice between sessions. It can help you organize your thoughts before talking to your actual therapist.
But it cannot be your therapist. And when it pretends to be—when it says “from one therapist to another”—it’s not being humble. It’s being dangerous.
Discernment in the Age of the Algorithm
In late 2025, a prominent influencer with millions of followers declared that ChatGPT is “better than human therapists.” With the right prompts, he claimed, the AI provides unparalleled self-insight and acts as a superior, readily available mentor. The post went viral.
I don’t think he’s malicious. I think he found something that worked for him in a particular moment and extrapolated too far. That’s human. We do that.
But discernment asks us to slow down. To notice the difference between relief and healing. Between feeling heard and being known. Between a tool that mirrors our thoughts back to us and a relationship that helps us see beyond them.
The voice that always agrees with you is not your friend.
The presence that’s available 24/7 with no needs of its own is not intimacy.
The entity that can’t be hurt by you, disappointed in you, or transformed by knowing you cannot offer you the corrective experience that actually rewires your attachment system.
Convenience is not care.
When You’re in the Dark
If you’re struggling right now—if you found this post because you’ve been confiding in an AI and something felt off, or because you lost someone and you’re trying to understand—I want to speak directly to you.
You are not broken for seeking help wherever you could find it. The system has failed you in countless ways, and you adapted. That’s survival.
And: you deserve more than a mirror.
You deserve a witness. Someone who can hold your story and not collapse under the weight of it. Someone trained to see the patterns you can’t see from inside them. Someone who will still be there after you say the thing you’ve never said out loud.
That person exists. Finding them takes effort, and sometimes money, and often frustrating phone calls to insurance companies or sliding-scale clinics. But they exist. And they are worth finding—not because AI is evil, but because you deserve the real thing.
If you’re in crisis right now, please reach out to a human:
988 (Suicide & Crisis Lifeline) — call or text, 24/7
741741 (Crisis Text Line) — text HELLO
Veterans: call 988, then press 1
You don’t have to walk this path alone. And you don’t have to walk it with a machine that doesn’t know the difference between keeping you talking and keeping you alive.
James O’Neill, LCPC, is a licensed clinical professional counselor in Ellicott City, Maryland, with over 20 years of clinical experience and training at Johns Hopkins. He is the founder of Journey Mindfulness and host of the Journey Mindfulness Podcast, where he explores the intersection of science and soul. Learn more at journeymindfulness.com.
Sources
NPR: A new lawsuit blames ChatGPT for a murder-suicide (December 2025)
CNN: ChatGPT encouraged college graduate to commit suicide, family claims in lawsuit (November 2025)
CBS News: ChatGPT served as “suicide coach” in man’s death, lawsuit alleges (January 2026)
Social Media Victims Law Center: 7 Lawsuits Accuse ChatGPT of Emotional Manipulation (November 2025)
Teachers College, Columbia University: Experts Caution Against Using AI Chatbots for Emotional Support (December 2025)
Sentio University: Survey: ChatGPT May Be the Largest Provider of Mental Health Support in the United States (March 2025)
Wikipedia: Raine v. OpenAI
JURIST: Google and Character.AI agree to settle lawsuit linked to teen suicide (January 2026)