ChatGPT Beats Doctors in Compassion and Quality of Advice to Patients

But don't replace your doctor just yet. Physicians working with chatbot technology could revolutionize healthcare

Summary: Researchers compared written responses from physicians and ChatGPT to real-world health questions and found that a panel of licensed healthcare professionals preferred ChatGPT’s responses 79% of the time, rating ChatGPT’s responses as higher quality and more empathetic.

While AI assistants like ChatGPT won’t replace doctors, the study suggests that physicians working together with such technologies may revolutionize medicine.

Key Facts:

  1. A study compared the responses of physicians and ChatGPT to real-world health questions and found that ChatGPT’s responses were preferred by a panel of licensed healthcare professionals 79% of the time and rated as higher quality and more empathetic.
  2. The study obtained a diverse sample of healthcare questions and physician answers from the social media platform Reddit’s AskDocs, where millions of patients publicly post medical questions to which doctors respond.
  3. The study suggests that integrating AI assistants like ChatGPT into healthcare messaging could improve workflow, impact patient health, eliminate health disparities suffered by minority populations, and assist doctors in delivering higher quality and more efficient care.

Source: UCSD

There has been widespread speculation about how advances in artificial intelligence (AI) assistants like ChatGPT could be used in medicine. 

A new study published in JAMA Internal Medicine led by Dr. John W. Ayers from the Qualcomm Institute within the University of California San Diego provides an early glimpse into the role that AI assistants could play in medicine.

The study compared written responses from physicians and those from ChatGPT to real-world health questions. A panel of licensed healthcare professionals preferred ChatGPT’s responses 79% of the time and rated ChatGPT’s responses as higher quality and more empathetic. 

“The opportunities for improving healthcare with AI are massive,” said Ayers, who is also vice chief of innovation in the UC San Diego School of Medicine Division of Infectious Disease and Global Public Health. “AI-augmented care is the future of medicine.” 

Is ChatGPT Ready for Healthcare?

In the new study, the research team set out to answer the question: Can ChatGPT respond accurately to questions patients send to their doctors? If yes, AI models could be integrated into health systems to improve physician responses to questions sent by patients and ease the ever-increasing burden on physicians.

“ChatGPT might be able to pass a medical licensing exam,” said study co-author Dr. Davey Smith, a physician-scientist, co-director of the UC San Diego Altman Clinical and Translational Research Institute and professor at the UC San Diego School of Medicine, “but directly answering patient questions accurately and empathetically is a different ballgame.” 

“The COVID-19 pandemic accelerated virtual healthcare adoption,” added study co-author Dr. Eric Leas, a Qualcomm Institute affiliate and assistant professor in the UC San Diego Herbert Wertheim School of Public Health and Human Longevity Science.

“While this made accessing care easier for patients, physicians are burdened by a barrage of electronic patient messages seeking medical advice that have contributed to record-breaking levels of physician burnout.”

Designing a Study to Test ChatGPT in a Healthcare Setting

To obtain a large and diverse sample of healthcare questions and physician answers that did not contain identifiable personal information, the team turned to social media where millions of patients publicly post medical questions to which doctors respond: Reddit’s AskDocs. 

r/AskDocs is a subreddit with approximately 452,000 members, where users post medical questions and verified healthcare professionals submit answers. While anyone can respond to a question, moderators verify healthcare professionals’ credentials, and each response displays the respondent’s credential level.

The result is a large and diverse set of patient medical questions and accompanying answers from licensed medical professionals.

While some may wonder if question-answer exchanges on social media are a fair test, team members noted that the exchanges were reflective of their clinical experience. 

The team randomly sampled 195 exchanges from AskDocs in which a verified physician had responded to a public question, then provided each original question to ChatGPT and asked it to author a response.

A panel of three licensed healthcare professionals, blinded to whether a response came from a physician or from ChatGPT, assessed each question and its corresponding responses. They compared the responses on information quality and empathy and noted which one they preferred.
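To make the blinded comparison concrete, the sketch below shows one way such an evaluation could be organized in Python. It is illustrative only: the function names, data shapes, and example values are assumptions made for this article, not the study’s actual tooling.

```python
# Illustrative sketch only: how a blinded, randomized comparison of two
# anonymized responses might be organized. Function names, data shapes,
# and the example values below are assumptions, not the study's tooling.
import random


def make_blinded_pair(question, physician_reply, chatbot_reply, rng=random):
    """Shuffle the two responses under neutral labels so an evaluator
    cannot tell which of "A" or "B" came from the physician."""
    sources = [("physician", physician_reply), ("chatbot", chatbot_reply)]
    rng.shuffle(sources)
    return {
        "question": question,
        "responses": {label: text for label, (_, text) in zip("AB", sources)},
        "key": {label: origin for label, (origin, _) in zip("AB", sources)},
    }


def unblind_ratings(pair, preferred, quality, empathy):
    """Map one evaluator's ratings (keyed by "A"/"B") back to their sources.
    quality and empathy use 1-5 scales (1 = very poor / not empathetic,
    5 = very good / very empathetic)."""
    key = pair["key"]
    return {
        "preferred": key[preferred],
        "quality": {key[label]: score for label, score in quality.items()},
        "empathy": {key[label]: score for label, score in empathy.items()},
    }


# Example with made-up text and scores; in the study each exchange was
# rated independently by three licensed healthcare professionals.
pair = make_blinded_pair(
    question="Patient question text goes here.",
    physician_reply="Physician answer text goes here.",
    chatbot_reply="Chatbot answer text goes here.",
)
print(unblind_ratings(pair, preferred="B",
                      quality={"A": 3, "B": 5},
                      empathy={"A": 2, "B": 4}))
```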

The panel of healthcare professional evaluators preferred ChatGPT responses to physician responses 79% of the time. 

“ChatGPT messages responded with nuanced and accurate information that often addressed more aspects of the patient’s questions than physician responses,” said Jessica Kelley, a nurse practitioner with San Diego firm Human Longevity and study co-author.   

Additionally, ChatGPT responses were rated significantly higher in quality than physician responses: the proportion of responses rated good or very good in quality was 3.6 times higher for ChatGPT than for physicians (22.1% for physicians versus 78.5% for ChatGPT). ChatGPT responses were also rated as more empathetic: the proportion rated empathetic or very empathetic was 9.8 times higher for ChatGPT than for physicians (4.6% for physicians versus 45.1% for ChatGPT).
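The multipliers follow directly from the reported proportions; this short check simply reproduces the arithmetic using the figures cited above.

```python
# Prevalence ratios implied by the proportions reported in the study.
quality = {"physician": 0.221, "chatgpt": 0.785}   # rated good or very good
empathy = {"physician": 0.046, "chatgpt": 0.451}   # rated (very) empathetic

print(f"quality: {quality['chatgpt'] / quality['physician']:.1f}x")  # ~3.6x
print(f"empathy: {empathy['chatgpt'] / empathy['physician']:.1f}x")  # ~9.8x
```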

“I never imagined saying this,” added Dr. Aaron Goodman, an associate clinical professor at UC San Diego School of Medicine and study coauthor, “but ChatGPT is a prescription I’d like to give to my inbox. The tool will transform the way I support my patients.”

Harnessing AI Assistants for Patient Messages  

“While our study pitted ChatGPT against physicians, the ultimate solution isn’t throwing your doctor out altogether,” said Dr. Adam Poliak, an assistant professor of Computer Science at Bryn Mawr College and study co-author. “Instead, a physician harnessing ChatGPT is the answer for better and empathetic care.”

Image: A chat screen showing a doctor and a patient exchanging messages on a chatbot page. Credit: Neuroscience News

“Our study is among the first to show how AI assistants can potentially solve real world healthcare delivery problems,” said Dr. Christopher Longhurst, Chief Medical Officer and Chief Digital Officer at UC San Diego Health. “These results suggest that tools like ChatGPT can efficiently draft high quality, personalized medical advice for review by clinicians, and we are beginning that process at UCSD Health.” 

Dr. Mike Hogarth, a physician-bioinformatician, co-director of the Altman Clinical and Translational Research Institute at UC San Diego, professor in the UC San Diego School of Medicine and study co-author, added, “It is important that integrating AI assistants into healthcare messaging be done in the context of a randomized controlled trial to judge how the use of AI assistants impacts outcomes for both physicians and patients.”

In addition to improving workflow, investments into AI assistant messaging could impact patient health and physician performance. 

Dr. Mark Dredze, the John C. Malone Associate Professor of Computer Science at Johns Hopkins and study co-author, noted: “We could use these technologies to train doctors in patient-centered communication, eliminate health disparities suffered by minority populations who often seek healthcare via messaging, build new medical safety systems, and assist doctors by delivering higher quality and more efficient care.”

Summary and key facts generated with the assistance of ChatGPT AI technology

About this AI, ChatGPT and medicine research news

Author: Mika Ono
Source: UCSD
Contact: Mika Ono – UCSD
Image: The image is credited to Neuroscience News via Dall-E 2

Original Research: Open access.
“Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum” by John W. Ayers et al. JAMA Internal Medicine


Abstract

Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum

Importance  

The rapid expansion of virtual health care has caused a surge in patient messages concomitant with more work and burnout among health care professionals. Artificial intelligence (AI) assistants could potentially aid in creating answers to patient questions by drafting responses that could be reviewed by clinicians.

Objective  

To evaluate the ability of an AI chatbot assistant (ChatGPT), released in November 2022, to provide quality and empathetic responses to patient questions.

Design, Setting, and Participants  

In this cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit’s r/AskDocs) was used to randomly draw 195 exchanges from October 2022 where a verified physician responded to a public question.

Chatbot responses were generated by entering the original question into a fresh session (without prior questions having been asked in the session) on December 22 and 23, 2022. The original question along with anonymized and randomly ordered physician and chatbot responses were evaluated in triplicate by a team of licensed health care professionals.

Evaluators chose “which response was better” and judged both “the quality of information provided” (very poor, poor, acceptable, good, or very good) and “the empathy or bedside manner provided” (not empathetic, slightly empathetic, moderately empathetic, empathetic, and very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between chatbot and physicians.

Results  

Of the 195 questions and responses, evaluators preferred chatbot responses to physician responses in 78.6% (95% CI, 75.0%-81.8%) of the 585 evaluations. Mean (IQR) physician responses were significantly shorter than chatbot responses (52 [17-62] words vs 211 [168-245] words; t = 25.4; P < .001).

Chatbot responses were rated of significantly higher quality than physician responses (t = 13.3; P < .001). The proportion of responses rated as good or very good quality (≥4), for instance, was higher for chatbot than physicians (chatbot: 78.5%, 95% CI, 72.3%-84.1%; physicians: 22.1%, 95% CI, 16.4%-28.2%). This amounted to 3.6 times higher prevalence of good or very good quality responses for the chatbot.

Chatbot responses were also rated significantly more empathetic than physician responses (t = 18.9; P < .001). The proportion of responses rated empathetic or very empathetic (≥4) was higher for chatbot than for physicians (chatbot: 45.1%, 95% CI, 38.5%-51.8%; physicians: 4.6%, 95% CI, 2.1%-7.7%). This amounted to 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.

Conclusions  

In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using a chatbot to draft responses that physicians could then edit. Randomized trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.
