Controversial Experiment Saw Mental Health Support Provided Using AI

An experiment in which mental health support was provided to about 4,000 people with the help of an artificial intelligence (AI) chatbot has been met with severe criticism online over concerns about informed consent.

On Friday, Rob Morris, co-founder of the social media app Koko, announced the results of an experiment his company had run using GPT-3.  

Koko allows people to share their problems with other users, who then try to help them “rethink” the situation, in an approach that has been likened to a form of cognitive behavioral therapy. For the experiment, users offering support could opt to have an AI helper compose their responses, which they could then send as written, edit, or replace entirely.

“Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own (p < .001). Response times went down 50%, to well under a minute,” Morris wrote on Twitter.

“And yet… we pulled this from our platform pretty quickly. Why?” he added. “Once people learned the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty.”

He went on to suggest that this could be because language models (essentially really, really good autocomplete) lack human lived experience, so their responses come across as inauthentic. Many people, however, focused instead on whether the participants had provided informed consent.

In a later clarification, Morris stressed that they “were not pairing people up to chat with GPT-3, without their knowledge”.

People have pointed out that it is not clear how the statement that “everyone knew about the feature” fits with the claim that “once people learned the messages were co-written by a machine, it didn’t work”.

Morris told Insider that the study was “exempt” from informed consent law, pointing to a previous study by the company that had also been exempt, and noting that “every individual has to provide consent to use the service”.

“This imposed no further risk to users, no deception, and we don’t collect any personally identifiable information or personal health information (no email, phone number, ip, username, etc),” he added.

A better look at the methodology would help to clarify when informed consent was given and when the participants learned that responses could have been created by (human-supervised) AI. However, it is unclear at this stage whether the data will be published, with Morris now describing it to Insider as not a university study, “just a product feature explored”.
