Extreme Political Views Drive Higher Belief in Misinformation

Summary: A new study reveals that users with extreme political views are more likely to encounter and believe online misinformation. The research shows that misinformation spreads across the political spectrum, but its impact is most pronounced among those at the conservative and liberal extremes.

These individuals tend to see false news early in its circulation, making timely interventions crucial. The findings suggest that, to be effective, efforts to curb misinformation should target the users most vulnerable to it and intervene quickly.

Key Facts:

  • Politically extreme users are more likely to believe and spread false news.
  • Misinformation reaches these users early, making quick interventions crucial.
  • Targeted interventions reduce misinformation more effectively than broad approaches.

Source: NYU

Political observers have been troubled by the rise of online misinformation—a concern that has grown as we approach Election Day. However, while the spread of fake news may pose threats, a new study finds that its influence is not universal. Rather, users with extreme political views are more likely than others to both encounter and believe false news.

“Misinformation is a serious issue on social media, but its impact is not uniform,” says Christopher K. Tokita, the lead author of the study, conducted by New York University’s Center for Social Media and Politics (CSMaP).

One takeaway from these simulations was that the earlier interventions were applied, the more likely they were to be effective. Credit: Neuroscience News

The findings, which appear in the journal PNAS Nexus, also indicate that current methods to combat the spread of misinformation are likely not viable—and that the most effective way to address it is to implement interventions quickly and to target them toward users most likely to be vulnerable to these falsehoods.

“Because these extreme users also tend to see misinformation early on, current social media interventions often struggle to curb its impact—they are typically too slow to prevent exposure among those most receptive to it,” adds Zeve Sanderson, executive director of CSMaP. 

Existing methods for assessing exposure to, and the impact of, online misinformation rely on measuring views or shares. However, these metrics fail to fully capture misinformation’s true impact, which depends not just on how widely a story spreads but also on whether users actually believe it.

To address this shortcoming, Tokita, Sanderson, and their colleagues developed a novel approach using Twitter (now “X”) data to estimate not just how many users were exposed to a specific news story, but also how many were likely to believe it. 

“What is particularly innovative about our approach in this research is that the method combines social media data tracking the spread of both true news and misinformation on Twitter with surveys that assessed whether Americans believed the content of these articles,” explains Joshua A. Tucker, a co-director of CSMaP and an NYU professor of politics, one of the paper’s authors.

“This allows us to track both the susceptibility to believing false information and the spread of that information across the same articles in the same study.”

The researchers captured 139 news articles published between November 2019 and February 2020—102 rated as true and 37 rated as false or misleading by professional fact-checkers—and calculated the spread of those articles across Twitter from the time of their initial publication.

This sample of popular articles was drawn from five types of news streams: mainstream left-leaning publications, mainstream right-leaning publications, low-quality left-leaning publications, low-quality right-leaning publications, and low-quality publications without an apparent ideological lean.

To establish veracity, the researchers sent each article to a team of professional fact-checkers within 48 hours of its publication; the fact-checkers rated each article as “true” or “false/misleading.”

To estimate exposure to and belief in these articles, the researchers combined two types of data. First, they used Twitter data to identify which users were potentially exposed to each article; they also estimated each potentially exposed user’s placement on a liberal–conservative scale using an established method that infers a user’s ideology from the prominent news and political accounts they follow.
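As a rough illustration of that follow-based inference, here is a toy sketch (not the study’s actual method, which uses an established ideal-point estimation technique; the account handles and scores are invented for the example):

```python
# Toy sketch of follow-based ideology inference (NOT the study's method):
# approximate a user's ideology as the mean score of the prominent
# political/news accounts they follow. All handles and scores are hypothetical.

ELITE_IDEOLOGY = {  # hypothetical ideal points: -2 (liberal) to +2 (conservative)
    "@outlet_left": -0.8,
    "@outlet_right": 1.1,
    "@politician_left": -1.7,
    "@politician_right": 1.8,
}

def estimate_user_ideology(followed: list[str]) -> float | None:
    """Average the known scores of the elite accounts a user follows."""
    scores = [ELITE_IDEOLOGY[a] for a in followed if a in ELITE_IDEOLOGY]
    return sum(scores) / len(scores) if scores else None

print(estimate_user_ideology(["@outlet_left", "@politician_left"]))  # -1.25
```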

Second, to determine the likelihood that these exposed users would believe an article to be true, they deployed real-time surveys as each article spread online. These surveys asked Americans who are habitual internet users to classify the article as true or false and to provide demographic information, including their ideology.

From this survey data, the authors calculated the proportion of individuals within each ideological category who believed the article to be true. Applying these proportions to the exposed users in each category then yielded, for every article, an estimate of the number of Twitter users who were both exposed to it and receptive to believing it.
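In arithmetic terms, the estimate multiplies each ideological bin’s exposed-user count by that bin’s survey-measured belief rate and sums across bins. A minimal sketch with hypothetical numbers (illustrative only, not the paper’s data or code):

```python
# Combine per-bin Twitter exposure counts with per-bin survey belief
# rates to estimate exposed-and-receptive users for one article.
# All numbers below are hypothetical.

exposed_by_ideology = {"far_left": 12_000, "center": 40_000, "far_right": 25_000}
belief_rate_by_ideology = {"far_left": 0.30, "center": 0.10, "far_right": 0.45}

receptive_exposure = sum(
    exposed_by_ideology[bin_] * belief_rate_by_ideology[bin_]
    for bin_ in exposed_by_ideology
)
print(f"Estimated exposed-and-receptive users: {receptive_exposure:,.0f}")
# 12,000*0.30 + 40,000*0.10 + 25,000*0.45 = 18,850
```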

Overall, the findings showed that while false news reached users across the political spectrum, those with more extreme ideologies (both conservative and liberal) were far more likely to both see and believe it. Crucially, these receptive users tended to encounter misinformation early in its spread across Twitter.

The research design also allowed the study’s authors to simulate the impact of different types of interventions designed to stop the spread of misinformation. One takeaway from these simulations was that the earlier interventions were applied, the more likely they were to be effective.

Another was that “visibility” interventions—whereby a platform makes flagged misinformation less likely to appear in users’ feeds—appeared more effective at reducing the reach of misinformation to susceptible users than interventions aimed at discouraging users from sharing it.
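To make the logic of such a simulation concrete, here is a minimal toy model (our own sketch, not the paper’s simulation code): once an article is flagged at some point in time, later would-be exposures are suppressed with some probability. Every parameter below is hypothetical.

```python
# Toy model of a "visibility" intervention: exposures occurring after
# the flag time are suppressed with probability `suppression`.

def receptive_reach(exposure_times, belief_prob, t_flag, suppression=0.75):
    """Expected receptive exposures under a visibility intervention.

    exposure_times: hours since publication at which receptive users
                    would otherwise be exposed (hypothetical).
    belief_prob:    probability an exposed user believes the article.
    t_flag:         hour at which the platform flags the article.
    suppression:    fraction of post-flag exposures prevented.
    """
    reach = 0.0
    for t in exposure_times:
        survives = 1.0 if t < t_flag else (1.0 - suppression)
        reach += survives * belief_prob
    return reach

times = [1, 2, 2, 3, 5, 8, 13, 24]            # hypothetical exposure times
print(receptive_reach(times, 0.4, t_flag=4))   # early flag -> 2.0
print(receptive_reach(times, 0.4, t_flag=20))  # late flag  -> 2.9
```

Running the toy model with an early versus a late flag reproduces the qualitative pattern above: the later the intervention, the more receptive exposure has already accumulated.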

“Our research indicates that understanding who is likely to be receptive to misinformation, not just who is exposed to it, is key to developing better strategies to fight misinformation online,” advises Tokita, now a data scientist in the tech industry.

The study’s other authors included Kevin Aslett, a CSMaP postdoctoral researcher and University of Central Florida professor at the time of the study who now works as a researcher in the tech industry, William P. Godel, an NYU doctoral student at the time of the study and now a researcher in the tech industry, as well as CSMaP researchers Jonathan Nagler and Richard Bonneau.

Funding: The research was supported by a graduate research fellowship from the National Science Foundation (DGE1656466).

About this psychology research news

Author: James Devitt
Source: NYU
Contact: James Devitt – NYU
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“Measuring receptivity to misinformation at scale on a social media platform” by Christopher K. Tokita et al. PNAS Nexus


Abstract

Measuring receptivity to misinformation at scale on a social media platform

Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it.

To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story.

As a proof of concept, we applied this method to 139 viral news articles and find that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies.

These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it.

This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions.

To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms.

We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread.

Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
