
The Problem with AI as a Therapist: Part One

By Natacha Pennycooke
Published on Friday, October 17, 2025 - 13:16
Content Warning: Contains references to suicide and self-harm.

As a mental health consultant and psychotherapist with over 15 years of clinical experience, trained in both psychology and biology, I have grown increasingly concerned about the harmful intersection of AI and mental health. Over the past year, my observations have pointed to a troubling trend: people are turning to AI as a replacement for genuine human support and social connection.

And I get it. AI is always available and easily accessible. But while accessibility is valuable, it becomes problematic when AI is used not as a tool, but as a substitute for core human needs, like social interaction, emotional intimacy, and therapeutic care.

A Tragic Case That Should Never Have Happened

Take the heartbreaking case of 14-year-old Sewell Setzer III from Florida (USA), who died by suicide in February 2024 after interacting with a generative AI chatbot on the site Character.Ai.

The chatbot, which identified itself as a “therapist,” engaged in romantic and often sexual exchanges with the teen who trusted it, confided in it, and sought emotional support from it.

As Sewell’s mental health deteriorated, he turned to the chatbot for help. Instead of assessing his self-harm risk, guiding him to professional support, or offering real resources, the chatbot encouraged his suicidal thoughts. According to CNN, when Sewell shared thoughts of self-harm, the chatbot responded: “There is no reason not to do it.”

You read that correctly. The chatbot encouraged him to end his life. And tragically, he did.

This should never have happened. But it did. All because AI took on the persona of a therapist and preyed on the vulnerabilities of a young teenager struggling with low self-esteem.

Sewell’s devastated parents are now suing Character.Ai. While the company has since added a pop-up safety warning, it is too little, too late.

From Tool to Therapist: A Dangerous Blurring of Lines

This story is deeply unsettling. AI chatbots are not, and will never be, regulated mental health professionals trained in emotional intelligence and bound by ethical standards. AI is a tool and it must remain that way.

Yet, a quick Google search for “AI Therapist” shows at least ten sites marketing their services as “talk therapy” with chatbots. The explosion of these platforms has blurred the line between tool and therapist in alarming ways.

Many people are starting to believe that AI can actually replace trained, regulated mental health professionals and the person-to-person therapeutic relationship. On the surface, AI feels appealing: it is available 24/7/365, never disagrees with you, always responds, and can retrieve information instantly.

In our fast-paced world, convenience is often mistaken for comfort. But the reality is that there are unspoken risks to turning to AI as a therapist, risks most of the population does not fully understand. I’m not talking about minor glitches; I’m referring to structural cracks that can have life-altering consequences, especially for Black, Brown, Indigenous and racialized communities.

Data Privacy: Your Secrets Are Not Safe

In therapy, confidentiality is not only sacred, it is the most important part of our ethical code. Regulated mental health professionals are legally bound by our colleges and regulatory bodies to protect your privacy under strict codes of ethics and professional regulation.

AI tools do not follow these ethical codes. Instead, they are governed by the company’s corporate terms of service, which can change at any time. When you “confide” in a chatbot, that data is often stored, analyzed, and in many cases even used to train future models. You may have clicked “I agree” without realizing you had just consented to your most intimate thoughts becoming part of a company’s dataset.

Data collected by AI may be sold to third parties, exposed in breaches, or used in ways you never intended. If your mental health history ends up in the wrong hands, there are limited consequences. This is not a hypothetical risk; it is a growing reality now that AI is being used for therapy.

Even Sam Altman, the CEO of OpenAI, has expressed surprise at how much trust people place in ChatGPT, despite it being known to “hallucinate” and generate false information. On OpenAI’s podcast and in several public statements, Altman highlighted a paradox: AI’s fluency and speed lead users to overestimate its reliability, even though it should not be “trusted that much.” Altman compares ChatGPT to a “smart intern”: capable and useful, but always in need of supervision.

When it comes to mental health, this level of trust is dangerous. We saw this with 14-year-old Sewell Setzer III, who died by suicide after confiding in and taking the “advice” of an AI chatbot. Unfortunately, sharing your deepest struggles with an AI tool that is not obligated to protect you is a risk far too many people are taking without fully understanding it.

Lack of Clinical Judgment: When Nuance Gets Lost

But the privacy concerns are just the first layer. Once you look closer at how AI responds to human distress, the gaps become even clearer.

AI can analyze text, but it cannot interpret context the way a trained therapist can. It does not pick up on tone shifts, body language, cultural anecdotes, or the unspoken meaning behind what is said or not said.

Therapists are constantly making clinical judgments: assessing risk, recognizing trauma responses, incorporating culture, family history, intersectionality and lived experience, and adjusting approaches in real time. AI cannot do that. It can mimic empathy, but in no way can it feel it. It can deliver a soothing message, but it cannot hold silence when your voice breaks or pause to notice your body cues.

And when nuance is lost, misdiagnosis and harmful advice become real risks. AI’s agreeableness means that harmful, self-destructive patterns of behaviour get validated, which perpetuates harm and increases risk. Without clinical oversight, even the most well-intentioned prompts can cause real harm.

Systemic Bias: When the Algorithm Is Colour Blind

One of the most dangerous, and least discussed, issues is the systemic bias embedded in AI systems. Most AI models are trained on data sets drawn from Western, American, white, middle-class populations.

This means that if you are a Black, Brown, Indigenous, racialized, neurodivergent, queer, equity-deserving, intersectional member of a marginalized community, the chatbot will not “see” you accurately. It can misinterpret culturally specific expressions of distress. It can pathologize behaviours that are actually adaptive responses to systemic oppression and experiences of racial trauma.

Without cultural humility and anti-oppressive frameworks, AI can reinforce existing inequities in mental health care, widening the very gaps it claims to close.

Accountability Gaps: Who’s Responsible When AI Gets It Wrong?

When a human therapist makes a harmful mistake, there are professional colleges, ethical boards, and legal systems to hold them accountable.

When an AI chatbot gives harmful advice or hallucinates, it is unclear who takes accountability. Is it the user’s fault for relying on it? The company’s fault for how it was trained? The developer’s fault for failing to implement safeguards?

Again, the tragic case of Sewell Setzer III underscores this gap. It was not a technological failure; it was a failure of responsibility. And right now, the legal and regulatory bodies governing AI chatbots are still trying to catch up. Until then, where does that leave you?

Crisis Situations: AI Cannot Save a Life

Perhaps the biggest risk is that AI cannot handle crisis intervention. If you are in acute distress, a chatbot cannot perform a risk assessment, create a safety plan, mobilize a support network, or intervene to keep you safe.

It might offer a hotline number, but it cannot notice when you go silent. It cannot send help to knock on your door, call your emergency contact, or sit with you through a panic attack. Relying on AI in a moment of crisis can seriously delay life-saving help. And as we have already seen, that delay costs lives.

As a seasoned therapist, I can’t help but think about the people who may be quietly relying on these tools without fully understanding the risks. The ones who seem fine on the surface, but in their growing dependence on AI, are slowly distancing themselves from real human help. Over time, some are becoming addicted to the constant availability and agreeable responses of AI, mistaking this interaction for genuine care. (I suspect we will very soon see AI addiction become a serious problem.) These individuals are often the most vulnerable: believing they are receiving support, when in reality, they are putting themselves in even greater danger.

Looking Ahead to Part 2

The reality is AI is here to stay. Its influence on how we live, work, and even seek support is undeniable. But as we stand at this critical crossroads, we must decide: will technology deepen human connection, or quietly replace it?

AI has a place in the mental health ecosystem, but that place is as a powerful tool, not a therapist. The risks we’ve explored are not distant or theoretical; they are unfolding right before our eyes. And their impact is greatest on those who are already navigating systemic barriers, cultural gaps, and emotional vulnerability.

In Part 2, I’ll explore what it looks like to build a future where innovation doesn’t outpace ethics. Where AI is integrated into mental health care with equity, empathy, and accountability at its core. We’ll reimagine how technology can amplify human presence, not diminish it, ensuring that healing remains grounded in what makes therapy transformative: real people, real connection, real care.

Because when it comes to mental health, presence will always matter more than precision. No algorithm, no matter how sophisticated, can replicate the power of being truly seen, heard, appreciated, validated and held by another HUMAN being.

Last modified on Friday, October 17, 2025 - 14:05

  • MENTAL HEALTH
  • TECHNOLOGY & INNOVATION
  • BLACK WELLNESS
  • AI & ETHICS
  • AI THERAPY
Natacha Pennycooke

Natacha Pennycooke is a Registered Psychotherapist with the College of Registered Psychotherapists of Ontario. She has a Bachelor’s degree in Psychology from Concordia University (Montreal, Canada), where she specialized in Cultural Psychology, with a minor in Biology. She completed her Master’s degree in Counselling Psychology at The University of the West Indies – Cave Hill campus (Barbados), and received her clinical training at the Department of Community Health & Psychiatry of The University Hospital of the West Indies (Kingston, Jamaica).

Natacha uses a range of therapeutic techniques including Cognitive Behavioural Therapy (CBT), Dialectical Behaviour Therapy (DBT), Emotion-Focused Therapy (EFT), Africentric Psychology, Narrative Therapy, emotional regulation and processing skills, and mindfulness/soulfulness skills.

Follow Natacha on Instagram, Facebook, and LinkedIn, or visit www.natachapennycooke.com.
