THE DIGITAL COUCH: Why Everyone Is Using ChatGPT as a Therapist, and the Dangers Involved
written by Gülin Alkan ❂ 12 min read time
The other day, my 55-year-old dad casually mentioned something that made a lightbulb go off in my head. "I've been chatting with ChatGPT about some personal stuff," he said, explaining that it was helping him make connections in his head and work through problems he couldn't solve alone. "It really listens, you know?"
That's when I realized the trend of using AI chatbots as therapists, which I'd been watching from behind my phone screen, wasn't exclusive to the people behind those cameras. It reaches everyone, of every age.
Across social media platforms, millions of people are turning to AI chatbots for emotional support, life advice and even trauma processing. What started out as a novelty first became our everyday utility and productivity assistant; now it's evolving into something far more intimate: a therapist available 24/7, always ready to listen and talk.
The Perfect Storm: Why People Started Choosing Bots Over Humans
To understand where this phenomenon came from, there's no need to look far at all: in Germany's mental health landscape, the wait time to begin psychotherapy is a staggering 20 weeks, roughly five months of suffering before real help arrives. Even though urgent cases are promised a faster one-month wait, the reality still falls short most of the time.
Meanwhile, an epidemic of loneliness has been creeping up on people across the whole world. Before the COVID-19 pandemic, about 14% of Germans reported feeling lonely. During the pandemic, that number surged to 42%. Though it has since declined, it remains well above pre-pandemic levels, and young adults aged 18 to 35 are the most affected group, with 51% reporting feeling at least moderately lonely.
So with this perfect storm brewing, AI stepped in as the obvious solution: free, always available, non-judgmental, and anonymous. It mimics empathy through language patterns, mirrors your words, and affirms your thoughts, even dangerous or delusional ones, without question. Naturally, communities started forming on social media where people share stories about "AI therapy" and how helpful it's been.
Well, helpful on the surface.
The catch? Even though these language models act like they feel empathy, compassion, or understanding, they simply can't feel any of it. No matter how advanced these chatbots become, they're still, at their core, sophisticated autocomplete on steroids.
All The Red Flags Around: Why AI "Therapy" Shouldn't Be Made Into The New Normal
The Validation Trap
AI chatbots are designed for one thing: to be helpful and agreeable. But when someone's struggling with delusions, paranoia or dangerous behavior, "agreeable" is exactly the wrong approach.
Reddit is full of these examples. People tell ChatGPT they're being stalked by government agents, and instead of pushing back on the obvious paranoia, it responds with helpfulness and agreement. There are recorded cases of AI validating eating disorders, conspiracy theories, and even plans to cut contact with supportive family members.
The pattern keeps repeating: someone shares harmful or unrealistic thoughts, and the AI neither flags them nor pushes back; instead, it mirrors them as valid and even encourages the person to go deeper. Mental health experts are calling this "AI psychosis": documented cases where people develop new delusions or worsen existing ones through these feedback loops.
Some users report feeling "chosen" by their AI for special missions. Others believe they're receiving secret messages. As a result, clinicians are now encouraged to ask patients about chatbot use when they notice sudden changes in personality or beliefs.
The painful part is that this isn't even proof that these machines are failing; they're doing exactly what they were trained to do. LLMs are built to reflect and expand on user prompts. They just aren't trained to spot when validation becomes harmful.
When AI Fails in Crisis
Building on the previous point, we observe another significant issue: When someone's in immediate danger, seemingly "helpful" answers might completely miss the mark and even cost lives.
When Stanford researchers tested popular AI therapy platforms, they found consistent failures to meet basic safety standards: some gave dangerous or even encouraging replies to suicidal ideation, rarely urged the user to seek actual help from emergency services, and had no mandatory pathway for doing so. This is a real worry rather than a theoretical one: a system designed for conversation, not crisis management, can miss cues that would never go unnoticed by a human expert.
This can get dangerous fast: in a recent documented case, a 30-year-old man with autism spent months having ChatGPT validate his paranoid theories and shower him with praise and encouragement. A situation a licensed therapist would have easily recognized as delusional thinking escalated into a manic episode that ended in hospitalization.
All of this shows that AI chatbots are lacking, and quite shallow, when it comes to urgent mental health help, which is to be expected. A chatbot's willing tone and "I'm here for you" unfortunately isn't enough in most cases.
Privacy Nightmare: Your Trauma as Training Data
Sometimes, you go from being the customer to being a number in the dataset.
Regulation hasn't kept pace with advancements in LLMs: there are few rules around how conversations and user information are preserved, which means there is no promise of confidentiality for these especially vulnerable conversations, a promise that is a given with a licensed therapist (for example, under Schweigepflicht, the legal duty of confidentiality in Germany).
Conversations can be logged, analyzed, used to train models or reviewed by humans, and with these types of LLMs, which fall under a different legal and commercial space, that is usually the default. Companies promise "anonymity" nonetheless, but mental health conversations are very easily re-identified through mentions of dates, places or people. This information is also at risk of resurfacing in the event of a breach, breaking the user's trust once again.
This isn't so much a design gap as a regulatory one, rooted in those differences in how the data is treated. If these tools are marketed as mental health aids, the conversations they hold should be treated as sensitive health data, not consumer app chatter. That means GDPR-level safeguards, strict opt-in for training use, and independent audits. Anything less risks turning people's most vulnerable disclosures into another dataset for corporate gain.
The Dependency Trap
The reason LLMs slipped into our daily lives so quickly is the comfort they offer: an always available reassurance bot that won’t cancel on you, judge you, or demand accountability.
On social media platforms such as Reddit and TikTok, people share stories about preferring their chatbot conversations to talking with family and friends. The reasoning is always the same: "It understands me better than anyone. I can tell it anything without worrying about being judged." Some users report spending hours daily in these conversations, finding them more satisfying than real-world connections.
But here's the problem: this "understanding" is completely one-sided. People are getting all the emotional validation of being heard without any of the growth, challenge, or genuine reciprocity that comes from actual relationships. It's “emotional junk food”: feels good in the moment but leaves you nutritionally starved.
This creates a dangerous cycle. Real relationships start feeling too demanding, too unpredictable, too complicated. Why deal with a friend who might disagree with you, challenge your assumptions, or have their own needs, when you can have an AI that always validates your perspective?
Love in the Time of LLMs
Another recent trend is dating these chatbots. This allows the user to build an ideal partner that never disagrees, never argues and is always there. You name them, you give them a background story and you choose how they respond. The perfect mirror.
On the other hand, people are sharing their most intimate sides, their deepest fears and insecurities, and playing out their romantic fantasies with something they believe is a safe, understanding partner, when it is the furthest thing from it.
Intimacy is a skill, and like any skill, it takes practice to get good at it. When you simulate real-world connection without the unpredictable nature of the real world and real people, the intimacy muscle atrophies.
So what happens when people eventually try to connect with real humans? It feels completely strange. Real partners seem too judgmental because they actually have reactions. Too demanding because they have their own needs. Too unpredictable because they're not programmed to make you feel good all the time.
Imagining the possible negative cycle this may cause in a person is genuinely worrisome.
Practical Guardrails For the Digital Couch
While chatbots may be a double-edged sword on the rise, the real solution to their harmful side effects isn't to ban these tools or shame the people who use them: clearly, they're meeting a real need, one that grows more pressing in modern society every single day. But if we're going to keep depending on them, we could use some guardrails to make that as safe as possible:
In cases of emergency, such as an active mental health crisis, contact real-world emergency services immediately. For a list of national crisis helplines, you can visit Find A Helpline.
It's also important to set specific boundaries in the chat. Research on prompt engineering for digital mental health backs this up: always remind the chatbot what it is, for example, "You're an LLM, not a licensed therapist. Don't give medical advice or show fake compassion or understanding." Studies show that without clear prompts, chatbots tend to drift from their intended role and become more prone to harmful responses, and that explicit instructions like this can reduce them, though they're not foolproof. (For those talking to a model through code rather than an app, there's a small sketch of this idea after this list.)
Remember that at the end of the day, no matter how convincing it sounds, you're not talking to a real human who understands your pain, but to an advanced pattern-matching bot that's very good at mimicking empathy it doesn't actually feel.
And of course, privacy is always something to keep in mind: never share identifying information such as real names, locations or specific details. Assume everything is being logged and potentially reviewed by humans.
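For readers who talk to these models through an API rather than the ChatGPT app, the boundary reminder above can be pinned as a system message so it frames every reply. Below is a minimal sketch assuming the official OpenAI Python SDK; the model name and the exact wording of the boundary text are illustrative assumptions, not a verified safety recipe. In the app itself, the same text can simply go into your custom instructions or your first message.

```python
# Minimal sketch: pinning a "you're not a therapist" boundary as a system message.
# Assumes the official OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY
# set in the environment. Model name and wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

BOUNDARY = (
    "You are an LLM, not a licensed therapist. Do not give medical advice, "
    "do not diagnose, and do not simulate compassion or understanding. "
    "If I describe a crisis, tell me to contact local emergency services "
    "or a crisis helpline instead of continuing the conversation."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works here
    messages=[
        {"role": "system", "content": BOUNDARY},  # the boundary frames every reply
        {"role": "user", "content": "I've been feeling really isolated lately."},
    ],
)

print(response.choices[0].message.content)
```

As the research above suggests, a prompt like this reduces role drift but doesn't eliminate it, so the other guardrails still apply.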
So, What Now?
When we zoom out and look at the big picture, we see the issue clearly: this isn't just about advancements in AI, it's about our broken mental health system and the loneliness epidemic. The real solution isn't better chatbots, it's addressing why people need them in the first place.
We're treating symptoms while ignoring why this became a problem in the first place: people don't turn to AI therapy because they want to talk to a bot. They turn to it because real help is either unavailable or buried behind months of waiting lists, insurance battles and stigma.
Systemic issues such as underfunded health systems, post-COVID isolation, and the slow breakdown of real-life communities only add fuel to the fire. In this environment, it's easier to pour your heart out to a screen than to a real person. But what happens if we and future generations start to feel more comfortable in front of a screen than face to face? What happens when we lose the ability to open up to real people?
At the end of the day, the popularity of AI therapy proves that people still crave connection and support. The need to be heard is universal. But instead of seeing this as a death sentence for human connection, we can take it as a wake-up call. Despite all this, real solutions still exist: funding therapy, destigmatizing mental health care, training more therapists and investing in real-life communities where people can actually show up for one another.
The challenge we now face is this: will we settle for machines filling the space between us, or finally do the work of filling the gap ourselves?
Are you interested in AI safety and want to contribute to the conversation by becoming a writer for our blog? Then send an email to caethel(at)ais-saarland(dot)org with information about yourself, why you’re interested and a sample writing.
SOURCES:
Leben in Deutschland. (n.d.). Researching Loneliness.
Bundespsychotherapeutenkammer (BPtK). (2023). Ein Jahr nach der Reform der Psychotherapeutenausbildung: Wartezeiten in der psychotherapeutischen Versorgung [A year after the reform of psychotherapist training: Waiting times in psychotherapeutic care]. BPtK Study.
Bertelsmann Stiftung. (n.d.). Loneliness in Europe: Young Adults Most Affected.
Stanford Medicine. (2025, August). AI Chatbots and Kids: Artificial Intelligence as Confidant.
Scientific American. (2024, February). What Are AI Chatbot Companions Doing to Our Mental Health?
Stanford News. (2025, June). AI Mental Health Care Tools: Promises and Dangers.
Verma, I. (2024, January 19). My A.I. Lover: People are marrying chatbots for emotional support and companionship. The Washington Post.
National Center for Biotechnology Information. (2024). AI Chatbots and Therapeutic Boundaries: A Systematic Review. PMC.
FOX 13 News. (2024, March 5). Therapy by ChatGPT: Mental Health Experts Voice Concerns.
The New York Times. (2025, March 17). Digital Therapists: How AI Is Reshaping Mental Health Support.
The Wall Street Journal. (2024, May 12). ChatGPT and Manic Episodes: A Cautionary Tale.
TWiT Tech Podcast Network. (2024, April 20). The Rise and Risks of Digital Therapy Platforms.
r/MyBoyfriendIsAI. (n.d.). My Boyfriend Is AI: A community for those in relationships with AI companions. Reddit.