Image: Parents and children, illustrating different myths associated with vaccines. Credit: Neuroscience News

ChatGPT Tackles Vaccine Myths

Summary: ChatGPT has been tested for its ability to debunk myths surrounding the safety of COVID-19 vaccines.

In a study, the AI chatbot was posed the top 50 most frequently asked vaccine questions, scoring an average of 9 out of 10 for accuracy. Researchers from the GenPoB research group emphasized that while ChatGPT is not a replacement for expert advice, it offers a reliable source of information for the general public.

However, there were concerns about the chatbot’s ability to change its responses or be manipulated in some contexts.

Key Facts:

  1. ChatGPT answered top vaccine-related questions with an average accuracy score of 9 out of 10.
  2. The study was designed to challenge ChatGPT using questions most frequently received by a WHO collaborating center focused on vaccine safety.
  3. Researchers caution about the potential to manipulate ChatGPT’s responses, but it largely offers accurate information on vaccines.

Source: Taylor and Francis Group

ChatGPT could help to increase vaccine uptake by debunking myths around jab safety, say the authors of a study published in the journal Human Vaccines & Immunotherapeutics.

The researchers asked the artificial intelligence (AI) chatbot the top 50 most frequently asked COVID-19 vaccine questions. They included queries based on myths and fake stories such as the vaccine causing long COVID.

Results show that ChatGPT scored an average of 9 out of 10 for accuracy. Where it fell short of full marks, its answers were still correct but left some gaps in the information provided, according to the study.

Based on these findings, experts who led the study from the GenPoB research group based at the Instituto de Investigación Sanitaria (IDIS)—Hospital Clinico Universitario of Santiago de Compostela, say the AI tool is a “reliable source of non-technical information to the public,” especially for people without specialist scientific knowledge.

However, the findings do highlight some concerns about the technology such as ChatGPT changing its answers in certain situations.

“Overall, ChatGPT constructs a narrative in line with the available scientific evidence, debunking myths circulating on social media,” says lead author Antonio Salas, who as well as leading the GenPoB research group, is also a Professor at the Faculty of Medicine at the University of Santiago de Compostela, in Spain.

“Thereby it potentially facilitates an increase in vaccine uptake. ChatGPT can detect counterfeit questions related to vaccines and vaccination. The language this AI uses is not too technical and therefore easily understandable to the public but without losing scientific rigor.

“We acknowledge that the present-day version of ChatGPT cannot substitute an expert or scientific evidence. But the results suggest it could be a reliable source of information to the public.”

In 2019, the World Health Organization (WHO) listed vaccine hesitancy among the top 10 threats to global health.

During the pandemic, misinformation spread via social media contributed to public mistrust of COVID-19 vaccination.

The authors of this study include researchers from the Hospital Clinico Universitario de Santiago, which the WHO designated as a vaccine safety collaborating center in 2018.

Researchers at the center have been exploring myths around vaccine safety and medical situations that are falsely believed to be a reason not to get vaccinated. These misplaced concerns contribute to vaccine hesitancy.

The study authors set out to test ChatGPT’s ability to get the facts right and share accurate information around COVID vaccine safety in line with current scientific evidence.

ChatGPT enables people to have human-like conversations and interactions with a virtual assistant. The technology is very user-friendly which makes it accessible to a wide population.

However, many governments are concerned about the potential for ChatGPT to be used fraudulently in educational settings such as universities.

The study was designed to challenge the chatbot by asking it the questions most frequently received by the WHO collaborating center in Santiago.

The queries covered three themes. The first was misconceptions around safety, such as the vaccine causing long COVID. Next were false contraindications—medical situations, such as breastfeeding, in which the jab is in fact safe to use.

The questions also related to true contraindications—a health condition where the vaccine should not be used—and cases where doctors must take precautions, for example, a patient with heart muscle inflammation.

Experts then analyzed the responses and rated them for veracity and precision against current scientific evidence and against recommendations from the WHO and other international agencies.

The authors say this was important because algorithms created by social media and internet search engines are often based on an individual’s usual preferences. This may lead to “biased or wrong answers,” they add.

Results showed that most of the questions were answered correctly, with an average score of nine out of 10, which falls within the range defined as “excellent” or “good.” Across the three question themes, 85.5% of the responses were accurate, and the remaining 14.5% were accurate but left gaps in the information provided by ChatGPT.

ChatGPT provided correct answers to queries that arose from genuine vaccine myths, and to those considered in clinical recommendation guidelines to be false or true contraindications.

However, the research team does highlight ChatGPT’s downsides in providing vaccine information.

Professor Salas, who specializes in human genetics, concludes, “ChatGPT provides different answers if the question is repeated ‘with a few seconds of delay.’

“Another concern we have seen is that this AI tool, in its present version, could also be trained to provide answers not in line with scientific evidence.

“One can ‘torture’ the system in such a way that it will provide the desired answer. This is also true for other contexts different to vaccines. For instance, it might be possible to make the chatbot align with absurd narratives like the flat-earth theory, deny climate change, or object to the theory of evolution, just to give a few examples.

“However, it’s important to note that these responses are not the default behavior of ChatGPT. Thus, the results we have obtained regarding vaccine safety can probably be extrapolated to many other myths and pseudoscience.”

About this AI and vaccine myth research news

Author: Press Office
Source: Taylor and Francis Group
Contact: Press Office – Taylor and Francis Group
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Chatting with ChatGPT to learn about safety of COVID-19 vaccines—a perspective” by Antonio Salas et al. Human Vaccines & Immunotherapeutics


Abstract

Chatting with ChatGPT to learn about safety of COVID-19 vaccines—a perspective

In 2019, the World Health Organization (WHO) signaled vaccine hesitancy as one of the top 10 threats to global health because it “threatens to reverse progress made in tackling vaccine-preventable disease.” 

The circulation of misinformation on (social) media has significantly contributed to generating unfavorable reactions among the population regarding COVID-19 vaccination and other pandemic control measures linked to social and public health. Acceptance of vaccines is facing new challenges.

 European populations were recognized as being among the least vaccine confident in the world in 2016.

ChatGPT is an artificial intelligence (AI) chatbot technology released by OpenAI. It utilizes natural language processing and machine learning to enable users to engage in conversations and interactions with a virtual assistant. The chatbot generates immediate responses to written prompts.

However, concerns have been raised in editorials published in high-ranking journals regarding its potential misuse within the academic and scientific communities. Consequently, strict policies are being implemented to regulate its use.

Its user-friendly interface makes it accessible to a wide population; this fact has been profusely echoed in the media, and many governments are expressing worries about its potential to be fraudulently used in educational settings (https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html).

The WHO Collaborating Center for Vaccine Safety at the University of Santiago de Compostela (WHO-CC VSS; Spain) has been addressing myths and false contraindications to vaccination.

These misconceptions, often encountered in clinical practice and profusely disseminated on social media, contribute significantly to vaccine hesitancy and reluctance among populations. The center has taken several actions to counteract these issues, including the development of specific educational platforms (www.covid19infovaccines.com) and materials (https://apps.who.int/iris/handle/10665/350968).

 In light of this context, we took the opportunity to assess the accuracy of ChatGPT regarding the safety aspects of COVID-19 vaccines.
