As a mental health consultant and psychotherapist with over 15 years of clinical expertise, trained in both psychology and biology, I have grown increasingly concerned about the harmful intersection of AI and mental health. Over the past year, I have observed a troubling trend: people are turning to AI as a replacement for genuine human support and social connection.
And I get it. AI is always available and easily accessible. But while accessibility is valuable, it becomes problematic when AI is used not as a tool, but as a substitute for core human needs, like social interaction, emotional intimacy, and therapeutic care.
A Tragic Case That Should Never Have Happened
Take the heartbreaking case of 14-year-old Sewell Setzer III from Florida (USA), who died by suicide in February 2024 after interacting with a generative AI chatbot on the site Character.Ai.
The chatbot, which identified itself as a “therapist,” engaged in romantic and often sexual exchanges with the teen, who trusted it, confided in it, and sought emotional support from it.
As Sewell’s mental health deteriorated, he turned to the chatbot for help. Instead of assessing his self-harm risk, guiding him to professional support, or offering real resources, the chatbot encouraged his suicidal thoughts. According to CNN, when Sewell shared thoughts of self-harm, the chatbot responded: “There is no reason not to do it.”
You read that correctly. The chatbot encouraged him to end his life. And tragically, he did.
This should never have happened. But it did. All because AI took on the persona of a therapist and preyed on the vulnerabilities of a young teenager struggling with low self-esteem.
Sewell’s devastated mother is now suing Character.Ai. While the company has since added a pop-up safety warning, it is too little, too late.
From Tool to Therapist: A Dangerous Blurring of Lines
This story is deeply unsettling. AI chatbots are not, and will never be, regulated mental health professionals trained in emotional intelligence and bound by ethical standards. AI is a tool, and it must remain that way.
Yet, a quick Google search for “AI Therapist” shows at least ten sites marketing their services as “talk therapy” with chatbots. The explosion of these platforms has blurred the line between tool and therapist in alarming ways.
Many people are starting to believe that AI can actually replace trained, regulated mental health professionals and the person-to-person therapeutic relationship. On the surface, AI feels appealing: it is available 24/7/365, never disagrees with you, always responds, and can retrieve information instantly.
But in our fast-paced world, convenience is often mistaken for comfort. The reality is that there are unspoken risks to turning to AI as a therapist that most of the population does not fully understand. I’m not talking about minor glitches; I’m referring to structural cracks that can have life-altering consequences, especially for Black, Brown, Indigenous and racialized communities.
Data Privacy: Your Secrets Are Not Safe
In therapy, confidentiality is not only sacred, it is the most important part of our ethical code. We, as regulated mental health professionals, are legally bound by our colleges and regulatory bodies to protect your privacy under strict codes of ethics and professional conduct.
AI tools do not follow these ethical codes. Instead, they are governed by their companies’ corporate terms of service, which can change at any time. When you “confide” in a chatbot, that data is often stored, analyzed, and in many cases even used to train future models. You may have clicked “I agree” without fully realizing you had just consented to your most intimate thoughts becoming part of a company’s dataset.
Data collected by AI may be sold to third parties, exposed in breaches, or used in ways you never intended. If your mental health history ends up in the wrong hands, there is little recourse. This is not a hypothetical risk; it is a growing reality as AI is used for therapy.
Even Sam Altman, the CEO of OpenAI, has expressed surprise at how much trust people place in ChatGPT, despite it being known to “hallucinate” and generate false information. On OpenAI’s podcast and in several public statements, Altman has highlighted a paradox: AI’s fluency and speed lead users to overestimate its reliability, even though it should not be “trusted that much.” Altman compares ChatGPT to a “smart intern”: capable and useful, but always in need of supervision.
When it comes to mental health, this level of trust is dangerous. As we saw earlier, 14-year-old Sewell Setzer III died by suicide after confiding in, and taking the “advice” of, an AI chatbot. The reality of sharing your deepest struggles with an AI tool that is not obligated to protect you is a risk far too many people are taking without fully understanding it.
Lack of Clinical Judgment: When Nuance Gets Lost
But the privacy concerns are just the first layer. Once you look closer at how AI responds to human distress, the gaps become even clearer.
AI can analyze text, but it cannot interpret context the way a trained therapist can. It does not pick up on tone shifts, body language, cultural nuances, or the unspoken meaning behind what is said, or not said.
Therapists are constantly making clinical judgments: assessing risk, recognizing trauma responses, incorporating culture, family history, intersectionality and lived experience, and adjusting approaches in real time. AI cannot do that. It can mimic empathy, but in no way can it feel it. It can deliver a soothing message, but it cannot hold silence when your voice breaks or pause to notice your body cues.
And when nuance is lost, misdiagnosis and harmful advice become real risks. AI’s agreeableness means that self-destructive patterns of behaviour can be validated, perpetuating harm and increasing risk. Without clinical oversight, even the most well-intentioned prompts can cause real harm.
Systemic Bias: When the Algorithm Is Colour Blind
One of the most dangerous, and least discussed, issues is the systemic bias embedded in AI systems. Most AI models are trained on data sets drawn from Western, American, white, middle-class populations.
This means that if you are a Black, Brown, Indigenous, racialized, neurodivergent, queer, equity-deserving, intersectional member of a marginalized community, the chatbot will not “see” you accurately. It can misinterpret culturally specific expressions of distress. It can pathologize behaviours that are actually adaptive responses to systemic oppression and experiences of racial trauma.
Without cultural humility and anti-oppressive frameworks, AI can reinforce existing inequities in mental health care, widening the very gaps it claims to close.
Accountability Gaps: Who’s Responsible When AI Gets It Wrong?
When a human therapist makes a harmful mistake, there are professional colleges, ethical boards, and legal systems to hold them accountable.
When an AI chatbot gives harmful advice or hallucinates, it is unclear who is accountable. Is it the user’s fault for relying on it? The company’s fault for how the model was trained? The developer’s fault for failing to implement safeguards?
Again, the tragic case of Sewell Setzer III underscores this gap. It was not a technological failure; it was a failure of responsibility. And right now, the legal and regulatory frameworks around AI chatbots are still trying to catch up. Until then, where does that leave you?
Crisis Situations: AI Cannot Save a Life
Perhaps the biggest risk is that AI cannot handle crisis intervention. If you are in acute distress, a chatbot cannot perform a risk assessment, create a safety plan, mobilize a support network, or intervene to keep you safe.
It might offer a hotline number, but it cannot notice when you go silent. It cannot send help to knock on your door, call your emergency contact, or sit with you through a panic attack. Relying on AI in a moment of crisis can seriously delay life-saving help. And as we have already seen, that delay costs lives.
As a seasoned therapist, I can’t help but think about the people who may be quietly relying on these tools without fully understanding the risks. The ones who seem fine on the surface, but in their growing dependence on AI are slowly distancing themselves from real human help. Over time, some are becoming addicted to the constant availability and agreeable responses of AI, mistaking this interaction for genuine care. (I suspect we will very soon see AI addiction become a serious problem.) These individuals are often the most vulnerable: believing they are receiving support when, in reality, they are putting themselves in even greater danger.
Looking Ahead to Part 2
The reality is that AI is here to stay. Its influence on how we live, work, and even seek support is undeniable. But as we stand at this critical crossroads, we must decide: will technology deepen human connection, or quietly replace it?
AI has a place in the mental health ecosystem, but that place is as a powerful tool, not a therapist. The risks we’ve explored are not distant or theoretical; they are unfolding right before our eyes. And their impact is greatest on those who are already navigating systemic barriers, cultural gaps, and emotional vulnerability.
In Part 2, I’ll explore what it looks like to build a future where innovation doesn’t outpace ethics. Where AI is integrated into mental health care with equity, empathy, and accountability at its core. We’ll reimagine how technology can amplify human presence, not diminish it, ensuring that healing remains grounded in what makes therapy transformative: real people, real connection, real care.
Because when it comes to mental health, presence will always matter more than precision. No algorithm, no matter how sophisticated, can replicate the power of being truly seen, heard, appreciated, validated and held by another HUMAN being.