
ChatGPT Appears More Empathetic Than Human Doctors When Responding To Patients

ChatGPT may actually be more empathetic than human doctors, according to a new study of patient-question ratings. While many people might assume that AI would deliver cold, purely factual advice when confronted with healthcare problems, it was actually rated better than real doctors when it came to tact.

The idea of using AI to make healthcare accessible to all has been floated many times now that language models have demonstrated impressive accuracy, but one question keeps coming up: do they have the empathy needed for a patient-facing role? Medicine requires people skills and sensitivity to cultural and social context, areas in which language models are famously weak.


But do people actually like dealing with AI as a healthcare “professional”? One study sought to find out.  

Researchers at the University of California San Diego took a sample of 195 patient questions randomly drawn from a Reddit forum, each of which had been answered by a verified physician. The team then fed those same questions into ChatGPT, gathered its responses, and shuffled them together with the original human answers. This randomized set was given to licensed healthcare professionals, who rated the accuracy of the information, which answers were better, and how empathetic the answers were (how good the "bedside manner" was).
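For readers curious about the mechanics, the key methodological step is the blinding: evaluators must not know which answer came from whom. Below is a minimal Python sketch of that shuffle-and-rate procedure. To be clear, this is our own illustration, not the study's code: the sample record is invented, and the rate function is a stand-in for the human evaluators.

```python
import random

# Hypothetical stand-in for the study's data: each record pairs a patient
# question with a verified physician's answer and a chatbot's answer.
# (The real study drew 195 such questions from Reddit.)
records = [
    {
        "question": "I've had a persistent cough for three weeks. Should I worry?",
        "physician": "A cough lasting over three weeks warrants a check-up.",
        "chatbot": (
            "I'm sorry to hear that. A cough lasting this long is worth "
            "having checked by a clinician, especially if you notice fever, "
            "blood, or shortness of breath."
        ),
    },
]

rng = random.Random(42)  # fixed seed so the shuffle is reproducible

def blinded_pair(record):
    """Return both answers in random order, keeping a hidden source label."""
    answers = [("physician", record["physician"]), ("chatbot", record["chatbot"])]
    rng.shuffle(answers)
    return answers

def rate(answer_text):
    """Placeholder for an evaluator's 1-5 rating.

    In the study, licensed healthcare professionals did this step;
    a random score here just keeps the sketch runnable.
    """
    return rng.randint(1, 5)

preferences = {"physician": 0, "chatbot": 0}
for record in records:
    # Evaluators see only the answer text, never the source label.
    scores = [(source, rate(text)) for source, text in blinded_pair(record)]
    winner = max(scores, key=lambda pair: pair[1])[0]
    preferences[winner] += 1

total = sum(preferences.values())
for source, wins in preferences.items():
    print(f"{source}: preferred in {wins / total:.0%} of pairs")
```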

Shockingly, the evaluators preferred ChatGPT's answers over those from physicians 78.6 percent of the time, judging the chatbot's responses to be of higher quality and noting that they were often much longer. The difference was staggering: around 80 percent of the chatbot's responses were rated "good" or "very good" quality, compared with just 22 percent of the physicians'.

When it came to empathy, the chatbot continued to outperform the physicians: 45 percent of ChatGPT's answers were rated "empathetic" or "very empathetic", compared with just 4.6 percent of the physicians' answers.


The results suggest ChatGPT could be an extremely effective online healthcare assistant, but the study has built-in limitations. First, the physician responses were taken from an online forum, where doctors answer in their free time and have no ongoing relationship with the patient. There is a good chance this produces terser, more impersonal responses, which could account for some of the empathy gap.

Also, ChatGPT is essentially a very effective way to scour and relay existing online information; it cannot think or reason logically. Physicians can be faced with new cases that fall outside previously documented case studies, and ChatGPT would likely falter where it has no solid foundational information to draw on.

It is therefore possible that while ChatGPT might not be the best single point of contact with healthcare, it could be an excellent way to triage cases and prioritize workloads for already swamped physicians. The researchers suggest it could draft responses for physicians to edit, getting the best of both worlds. One thing is for sure: AI is coming to healthcare, whether we like it or not.
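To make that suggested workflow concrete, here is a short, hypothetical Python sketch of an approval queue in which nothing written by the AI reaches a patient until a physician has edited and signed off on it. The class and function names are our own inventions for illustration, not anything described in the study.

```python
from dataclasses import dataclass

@dataclass
class DraftReply:
    """An AI-drafted reply awaiting physician sign-off (hypothetical)."""
    question: str
    ai_draft: str
    final_text: str = ""
    approved: bool = False

def physician_review(draft: DraftReply, edited_text: str) -> DraftReply:
    """A physician edits the AI draft; only then is the reply releasable."""
    draft.final_text = edited_text
    draft.approved = True
    return draft

# One queued item: the AI supplies an empathetic first draft...
queue = [DraftReply(
    question="Is this new mole something to worry about?",
    ai_draft="Thanks for reaching out. Any changing mole deserves attention...",
)]

# ...and the clinician supplies the medical judgement before release.
reviewed = physician_review(
    queue[0],
    edited_text="Please book an in-person skin check within the next two weeks.",
)
assert reviewed.approved
print(reviewed.final_text)
```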

The study is published in the journal JAMA Internal Medicine.

