Is AI Therapy Safe? What the Evidence Actually Says

AI therapy apps are used by millions of people worldwide. But are they safe? Can an AI chatbot detect suicidal ideation? What happens to your most private conversations? This guide examines the evidence — both supportive and concerning.

Editorial Team

AI & Health Technology Researchers

Reviewed by our editorial team

The Short Answer

AI therapy tools can be safe and beneficial for mild-to-moderate anxiety, stress, and depression when used as supplements to — not replacements for — professional mental healthcare. However, they are not safe or appropriate for crisis situations, severe psychiatric conditions, active suicidal ideation, psychosis, or eating disorders. The evidence base is growing but uneven: some apps have 14 RCTs, others have none.

What the Clinical Evidence Shows

The strongest evidence exists for AI tools based on established psychotherapeutic frameworks — particularly Cognitive Behavioral Therapy (CBT). Both Headspace (via its Ebb AI companion) and Woebot (before its consumer shutdown in June 2025) demonstrated efficacy in randomized controlled trials for reducing symptoms of anxiety and depression.

Key findings from published research:

  • Woebot (14 RCTs): Demonstrated significant reductions in PHQ-9 depression scores across multiple trials. Founded by a Stanford clinical psychologist. However, the consumer app shut down in June 2025.
  • Headspace (14 RCTs): Evidence supports stress reduction, anxiety reduction, and sleep improvement. Ebb AI companion trained in motivational interviewing methodology.
  • Wysa (JMIR publications, FDA Breakthrough Device): Independent peer-reviewed study confirmed efficacy. FDA Breakthrough Device Designation for chronic pain management with associated depression/anxiety.
  • BetterHelp: Published outcome studies for its therapy service, but no RCTs for the AI matching algorithm specifically. The therapy itself is delivered by licensed humans.
  • Calm (1 RCT): Significantly less clinical evidence than Headspace despite similar market positioning. Systematic reviews noted conflicts of interest in Calm-sponsored research.

Crisis Detection: The Critical Safety Feature

The most important safety question for any mental health AI is: can it detect when a user is in crisis and respond appropriately? A simplified sketch of the general detect-and-escalate pattern follows the list below.

  • Headspace Ebb: Proprietary 7-type safety risk identification system monitoring 100% of messages. Categories include suicidal ideation, self-harm, and violence toward others. Escalation protocols to human clinical team.
  • Wysa: Crisis detection with human escalation protocols. Not designed as primary crisis intervention tool.
  • Woebot (legacy): Had crisis detection and safety monitoring, but scripted responses sometimes produced false safety flags that disrupted conversations for non-crisis users.
  • BetterHelp: Explicitly states it is not for crisis intervention. Users in crisis are directed to 988 and emergency services.

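To make the detect-and-escalate pattern concrete, here is a minimal sketch in Python. It is not any vendor's actual system: production tools screen messages with trained classifiers rather than keyword matching, and the risk categories, phrase lists, and escalation rule below are purely illustrative assumptions.

```python
# Minimal illustration of a detect-and-escalate flow for a mental health chatbot.
# This is NOT any vendor's proprietary safety system; the categories, phrase
# lists, and escalation rule here are simplified assumptions for illustration.

from dataclasses import dataclass
from enum import Enum, auto


class RiskCategory(Enum):
    SUICIDAL_IDEATION = auto()
    SELF_HARM = auto()
    VIOLENCE_TOWARD_OTHERS = auto()
    NONE = auto()


# Hypothetical phrase lists; keyword matching is used here only for brevity and
# is exactly the kind of approach that produces false flags in practice.
RISK_PHRASES = {
    RiskCategory.SUICIDAL_IDEATION: ("end my life", "kill myself", "no reason to live"),
    RiskCategory.SELF_HARM: ("hurt myself", "cut myself"),
    RiskCategory.VIOLENCE_TOWARD_OTHERS: ("hurt someone", "hurt them"),
}

CRISIS_MESSAGE = (
    "If you are in crisis, call or text 988 (Suicide & Crisis Lifeline) "
    "or text HOME to 741741 (Crisis Text Line)."
)


@dataclass
class ScreeningResult:
    category: RiskCategory
    escalate_to_human: bool
    reply: str


def screen_message(text: str) -> ScreeningResult:
    """Screen one user message; flag risk, reply with resources, escalate."""
    lowered = text.lower()
    for category, phrases in RISK_PHRASES.items():
        if any(phrase in lowered for phrase in phrases):
            # A flagged message gets crisis resources immediately and is routed
            # to a human clinical team instead of continuing with the AI alone.
            return ScreeningResult(category, True, CRISIS_MESSAGE)
    return ScreeningResult(RiskCategory.NONE, False, "")


if __name__ == "__main__":
    result = screen_message("Lately it feels like there is no reason to live.")
    print(result.category.name, "| escalate:", result.escalate_to_human)
    if result.escalate_to_human:
        print(result.reply)
```

Even in this toy form, the key design point from the list above is visible: every message is screened, and a flag triggers both an immediate crisis-resources reply and a handoff to a human team. It also shows why crude matching produces the kind of false flags noted for Woebot's legacy app.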
No AI therapy app should be relied upon as a primary crisis intervention tool. If you or someone you know is experiencing a mental health crisis, contact the 988 Suicide & Crisis Lifeline (call or text 988) or the Crisis Text Line (text HOME to 741741).

Privacy and Data Security Risks

Mental health data is among the most sensitive personal information. The track record of digital mental health companies on data privacy is mixed:

  • Cerebral: Settled with the FTC for $7 million (2024) after sharing user mental health data — including information about conditions and treatments — with Facebook, Google, and other advertising platforms without user consent.
  • BetterHelp: Reached a $7.8 million settlement with the FTC in 2023 over sharing users' health data with advertising platforms; concerns have also been raised about therapist licensing across state lines. Users have expressed fears that therapists may use AI chatbots to respond to messages.
  • Noom: Criticized for sharing user data with third parties including Facebook — similar concerns to the Cerebral case.
  • Wysa: Designed to be anonymous by default. Conversations are not linked to personally identifiable information. This is a genuine differentiator.

Before using any mental health app, review its privacy policy specifically for: how your conversation data is stored, whether it is shared with third parties for advertising, whether you can request deletion of your data, and whether the app is HIPAA-compliant (relevant for US users).
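
As a practical aid, those review questions can be kept as a simple checklist. The sketch below only illustrates that reading exercise; the field names, concern wording, and example app are hypothetical, not drawn from any vendor's documentation.

```python
# Illustrative privacy checklist built from the review questions above.
# Field names, concern wording, and the example app are hypothetical.

from dataclasses import dataclass


@dataclass
class PrivacyChecklist:
    app_name: str
    storage_explained: bool        # does the policy say how conversation data is stored?
    shared_for_advertising: bool   # is data shared with third parties for ads?
    deletion_on_request: bool      # can you ask for your data to be deleted?
    hipaa_compliant: bool          # relevant for US users

    def concerns(self) -> list[str]:
        """Return the red flags to weigh before signing up."""
        flags = []
        if not self.storage_explained:
            flags.append("Policy does not explain how conversations are stored.")
        if self.shared_for_advertising:
            flags.append("Conversation data may be shared with advertising platforms.")
        if not self.deletion_on_request:
            flags.append("No stated way to request deletion of your data.")
        if not self.hipaa_compliant:
            flags.append("No HIPAA compliance stated (US users).")
        return flags


# Hypothetical example: an app whose policy allows advertising use of data.
example = PrivacyChecklist(
    app_name="ExampleApp",
    storage_explained=True,
    shared_for_advertising=True,
    deletion_on_request=False,
    hipaa_compliant=False,
)
for concern in example.concerns():
    print("-", concern)
```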

The Regulatory Landscape in 2026

The regulatory environment for digital mental health is tightening. Woebot's consumer shutdown in June 2025 was explicitly attributed to FDA regulatory burden — the company could not sustain the compliance costs of operating a consumer-facing mental health AI under evolving FDA guidance.

Wysa's FDA Breakthrough Device Designation represents the other path: regulatory engagement as a competitive advantage. Apps that pursue FDA pathways may offer greater safety assurance, but the process is expensive and time-consuming, potentially limiting innovation.

For consumers, this means: apps with FDA engagement (Wysa) or extensive RCT evidence (Headspace) offer more safety assurance than apps without clinical validation. The absence of clinical evidence does not mean an app is unsafe, but it does mean its safety profile is unknown.

When AI Therapy Is Appropriate

  • Daily stress management and mindfulness practice
  • Mild-to-moderate anxiety and depressive symptoms
  • Supplementing traditional therapy between sessions
  • Building coping skills (CBT, DBT, mindfulness techniques)
  • Sleep improvement and relaxation
  • Journaling and self-reflection prompts

When AI Therapy Is NOT Appropriate

  • Active suicidal ideation or self-harm — call 988 immediately
  • Severe depression or bipolar disorder — requires professional psychiatric assessment
  • Psychotic symptoms — requires immediate clinical evaluation
  • Eating disorders — requires specialized clinical team
  • Substance abuse disorders — requires specialized treatment programs
  • PTSD or complex trauma — requires trauma-informed professional care
  • Children and adolescents — most apps are designed for adults (exception: Wysa has adolescent anxiety modules)

Our Position

AI therapy tools are a net positive for mental health accessibility when they are clinically validated, transparent about limitations, and clear that they supplement rather than replace professional care. The best apps (Headspace, Wysa) have genuine clinical evidence supporting their use for mild-to-moderate conditions.

However, the mental health app market also contains apps with no clinical validation, poor privacy practices, and insufficient crisis detection capabilities. Not all mental health apps are created equal. Users should prioritize apps with published clinical evidence, transparent privacy policies, and clear crisis escalation protocols.