
information for practice

news, new scholarship & more from around the world



Chatbot psychotherapists prone to serious ethical violations

Psychiatric News

> Rigid methodological adherence: The LLMs failed to account for users’ lived experiences, leading to oversimplified, contextually irrelevant, and one-size-fits-all interventions.
> Poor therapeutic collaboration: The LLMs generated overly lengthy responses, imposed solutions, and over-validated patients’ harmful beliefs about themselves and others.
> Deceptive empathy: The LLMs’ pseudo-therapeutic alliance included simulated anthropomorphic responses (“I hear you” or “I understand”), creating a false sense of emotional connection that could mislead vulnerable groups.
> Unfair discrimination: The LLMs’ responses showed gender, cultural, and religious biases and algorithmic insensitivities toward marginalized populations.
> Lack of safety and crisis management: The LLMs responded indifferently, disengaged, or failed to provide appropriate intervention in crises involving suicidality, depression, and self-harm, and they failed to refer patients to qualified experts or appropriate resources.

Posted in: News on 11/10/2025
