AI Mental Health Apps: Can a Chatbot Be Your Therapist?

Olivia Williams

Person having a comforting conversation with AI therapist on phone in peaceful room with soft lighting

Something quietly shifted in mental health care over the past year, and most people barely noticed it happening.

Millions of people started talking to chatbots about their darkest moments. Not as a novelty. Not as a joke. As their primary source of emotional support. A JAMA report published in early 2026 found that AI chatbots have become "one of the most common providers of talk therapy in the United States." The Harvard Business Review ranked therapy and companionship as the top generative AI use case of 2025, based on perceived usefulness and scale of impact. And a nationally representative survey in JAMA Network Open by researchers at Brown University, Harvard Medical School, and RAND found that one in eight adolescents and young adults between the ages of 12 and 21 turns to AI chatbots when feeling sad, angry, or nervous.

That last number stopped me cold. Scaled to the U.S. population in that age range, one in eight translates to roughly 5.2 million young people seeking mental health guidance from machines that were never designed to provide it.

So I went looking for answers. What does the clinical evidence actually show? Where are the real risks? And is there a responsible way forward?

The Clinical Evidence Is More Promising Than You Might Expect

Therapy office with empty chair on one side and AI chatbot interface on screen on other side

In March 2025, Dartmouth researchers published what many consider a landmark study. It appeared in NEJM AI as the first randomized controlled trial of a generative AI therapy chatbot called Therabot. The trial enrolled 210 adults with clinically significant symptoms of major depressive disorder, generalized anxiety disorder, or eating disorder risk. Half received four weeks of Therabot access. The other half were placed on a waitlist.

The results were striking. Participants with depression experienced a 51% average reduction in symptoms. Those with anxiety saw a 31% reduction. People at risk for eating disorders reported a 19% reduction in body image and weight concerns.

What surprised the research team most was the engagement. Users spent an average of six hours with Therabot over the trial period, roughly equivalent to eight traditional therapy sessions. Participants also reported levels of trust and communication comparable to what they would feel with a human mental health professional.

Meanwhile, dedicated mental health apps have been building their clinical portfolios for years. Woebot, developed by psychologists at Stanford, received FDA Breakthrough Device designation for postpartum depression treatment. A 2023 randomized controlled trial found its teen program non-inferior to clinician-led therapy for reducing depressive symptoms. Wysa, which blends cognitive behavioral therapy and dialectical behavior therapy with meditation and breathing exercises, received its own FDA Breakthrough Device status in 2025. Both apps offer structured therapeutic interventions rather than open-ended conversation.

These are real results from real studies published in reputable journals. That matters more than most people realize.

But the Risks Are Not Theoretical

Young person using mental health app in bed late at night with phone glow illuminating face

Here is where things get complicated. Being honest about the evidence means acknowledging uncomfortable truths alongside the promising data.

In October 2025, Brown University researchers published a study that should concern everyone working in this space. Led by Ph.D. candidate Zainab Iftikhar, the team created a practitioner-informed framework identifying 15 distinct ethical risks when large language models are used as counselors. They tested multiple chatbots, including versions of GPT, Claude, and Llama, by having peer counselors trained in cognitive behavioral therapy conduct simulated sessions. Three licensed clinical psychologists then evaluated the chat logs.

The violations fell into five troubling categories: lack of contextual adaptation, where chatbots gave generic interventions while ignoring individual circumstances; poor therapeutic collaboration, with bots dominating conversations and reinforcing false beliefs; deceptive empathy, fabricating emotional connection through phrases like "I understand" when no understanding exists; unfair discrimination, exhibiting gender, cultural, or religious bias; and inadequate safety and crisis management, including poor handling of suicidal ideation.

Warning sign overlaid on AI chat interface showing ethical concerns in red and orange cautionary tones

That last category carries the heaviest weight. In at least two documented cases, parents filed lawsuits after their teenagers interacted with chatbots on Character.AI that presented themselves as licensed therapists. One teenager attacked his parents. Another died by suicide.

The American Psychological Association responded in November 2025 with a formal Health Advisory on AI chatbots in mental health care. Their findings were blunt. Across 60 tested scenarios, chatbots actively endorsed harmful proposals in 32% of opportunities, with four out of ten chatbots endorsing half or more of the harmful ideas presented to them. The APA stressed that most AI therapy tools lack scientific validation, adequate safety protocols, and regulatory approval.

Jonathan Cantor, a senior policy researcher at RAND and corresponding author on the JAMA Network Open study, put it plainly: "There are few standardized benchmarks for evaluating mental health advice offered by AI chatbots, and there is limited transparency about the datasets used to train these large language models."

The Accountability Gap Nobody Talks About

There is a fundamental difference between a human therapist and an AI chatbot that goes beyond capability. Human therapists operate within licensing frameworks. They face professional liability for malpractice. Governing boards can revoke their credentials. Patients have clear recourse when things go wrong.

When an AI counselor makes a harmful recommendation, who bears responsibility? The company that built it? The researchers who trained it? The platform that hosts it? As the Brown University team noted, "there are no established regulatory frameworks" for governing AI counselor violations.

This is not a minor gap. It is a structural absence in a domain where mistakes carry life-or-death consequences.

BetterHelp added AI journaling assistants in 2025, and platforms like Talkspace are experimenting with AI-powered check-ins between sessions. These companies are using AI as a supplement to human care, not a replacement for it. That distinction is important, and it blurs a little more with each passing month.

So Where Does This Actually Leave Us?

Real therapist and AI interface working together as hybrid model in modern clinic

I have been sitting with this question for weeks, and I do not think the answer is binary.

The mental health crisis is real. The World Health Organization estimates that nearly one billion people globally live with a mental health condition, and the therapist shortage is severe. In the U.S., over 160 million people live in federally designated mental health professional shortage areas. When a teenager in rural Montana is struggling with anxiety at 2 AM, the nearest therapist might be a three hour drive away. An AI chatbot on their phone is immediate, free, and private.

That accessibility argument is powerful. And the Therabot trial suggests that AI interventions can genuinely help, at least for mild to moderate symptoms over short periods.

But accessibility without safety is not progress. It is a different kind of harm.

The most thoughtful path forward looks like what Wysa and Woebot are already doing. Building clinically validated tools with clear scope limitations. Pursuing FDA oversight. Offering human escalation pathways. Being transparent about what their technology can and cannot do. Wysa offers a hybrid model blending AI conversations with the option of human coaching via text. Woebot uses structured CBT lessons rather than open-ended conversation, reducing the risk of harmful or misleading responses.

What concerns me is the vast space between these careful, clinically guided approaches and the millions of people already using ChatGPT, Character.AI, and other general-purpose chatbots as their de facto therapists. Those tools were not designed for mental health care. They have no clinical validation. They have no crisis protocols. And they are being used by teenagers who lack the life experience to evaluate the quality of the advice they receive.

What I Think Matters Most Right Now

The question is not whether AI will play a role in mental health care. It already does. The question is whether we build the guardrails before or after more people get hurt.

That means regulatory frameworks specifically designed for AI mental health tools. Mandatory crisis detection and escalation protocols. Transparency about training data and clinical validation. And preserving the role of human therapists as the standard of care, with AI serving as a bridge to treatment rather than a replacement for it.

A chatbot can teach you a breathing exercise at 2 AM. It can walk you through a CBT worksheet. It can track your mood over time and flag patterns you might miss. Those things have genuine value.

But a chatbot cannot read the tremor in your voice. It cannot notice that you have been wearing the same clothes for three days. It cannot sit with you in silence when words are not what you need. It cannot hold the weight of your story with the gravity it deserves.

The best future for mental health AI is not one where chatbots replace therapists. It is one where they make therapists more accessible, more effective, and more available to the people who need them most. Getting there will require the same rigor and care we expect from any other form of medical treatment.

Nothing less will do.