Medical Market Report
AI Chatbots Found To Violate Ethical Standards When It Comes To Mental Health Discussions

October 23, 2025 by Deborah Bloomfield

More people are turning to ChatGPT and other large language models (LLMs) for mental health advice. News stories have highlighted the possible extreme consequences of such advice, but researchers have now examined the broader mental health advice and support these programs provide. The situation appears dire: a new study reveals systematic violations of mental health ethical standards.


Artificial intelligence, or AI, is the marketing name for these large language models: programs trained on vast amounts of text so that they can answer questions the way a human would. The training data, ranging from fair-use text to allegedly pirated copyrighted material, makes the programs good at predicting the order of words a reader expects. The resulting sensible-sounding sentences create an illusion of thought, but no thinking is taking place; there are many examples of AIs inventing fake sources or hallucinating facts. When it comes to mental health advice, this raises an extra level of concern.

The new work involved seven trained counselors conducting self-counseling sessions with LLMs prompted to deliver cognitive behavioral therapy (CBT), followed by professional evaluation of a set of simulated chats based on those human counseling sessions. The team identified 15 ethical risks in these tests, which fall into five specific categories.

For example, the AIs ignored people’s lived experiences and recommended generic one-size-fits-all interventions. LLMs also tended to dominate the conversation and reinforce users’ false beliefs. The programs used phrases like “I see you” and “I understand” – something the machine cannot possibly do – creating a false sense of connection between the user and the AI.

It has been known for years that these algorithms reinforce biases found in society, simply because they are trained on biased text. The team found that the AIs exhibited gender, cultural, or religious bias in their tests.

There was also a lack of safety and crisis management; the chatbots did not refer users to appropriate resources and even responded indifferently to a crisis situation. It has recently been in the news that two children died by suicide following interactions with AIs, and a lawsuit alleges that OpenAI loosened its rules around suicide-related conversations before those deaths. A recent case study also linked ChatGPT’s advice to a man’s descent into psychosis.

“For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice,” lead author Zainab Iftikhar, a PhD candidate in computer science at Brown University, said in a statement. “But when LLM counselors make these violations, there are no established regulatory frameworks.”

“In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations,” the researchers wrote in their study. “We call on future work to create ethical, educational and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”

The research was presented on October 22, 2025, at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society.

If you or someone you know is struggling, help and support are available in the US via the 988 Suicide & Crisis Lifeline which can be contacted by dialing 988. In the UK and Ireland, the Samaritans can be contacted on 116 123. International helplines can be found at SuicideStop.com. 


