Key Questions Answered
Q: What did this study find about hate speech and psychiatric disorders?
A: Posts in online hate speech communities show speech-pattern similarities to posts in communities for certain personality disorders, such as borderline, narcissistic, and antisocial personality disorder.
Q: Does this mean people with psychiatric disorders are more hateful?
A: No. The researchers emphasize that they cannot know if users had actual diagnoses—only that the language patterns were similar, possibly due to shared traits like low empathy or emotional dysregulation.
Q: Why does this matter for online safety and mental health?
A: Understanding that hate speech mirrors certain psychological speech styles could help develop therapeutic or community-based strategies to combat toxic online behavior.
Summary: A new study using AI tools found that posts in online hate speech communities closely resemble the speech patterns seen in forums for certain personality disorders. While it doesn’t imply that people with psychiatric diagnoses are more prone to hate, the overlap suggests that online hate speech may cultivate traits like low empathy and emotional instability.
Posts from Cluster B personality disorder communities showed the most linguistic similarity to hate speech groups. These findings may inform future interventions by adapting therapeutic strategies typically used to manage such disorders.
Key Facts:
- Speech Overlap: Hate speech communities shared linguistic traits with Cluster B personality disorder communities.
- No Diagnostic Link: The study does not claim individuals with mental illness are more hateful—only that language patterns are similar.
- Therapeutic Potential: Insights could guide new strategies for countering hate speech using mental health approaches.
Source: PLOS
A new analysis suggests that posts in hate speech communities on the social media website Reddit share speech-pattern similarities with posts in Reddit communities for certain psychiatric disorders. Dr. Andrew William Alexander and Dr. Hongbin Wang of Texas A&M University, U.S., present these findings July 29th in the open-access journal PLOS Digital Health.
The ubiquity of social media has raised concerns about its role in spreading hate speech and misinformation, potentially contributing to prejudice, discrimination and real-world violence.

Prior research has uncovered associations between certain personality traits and the act of posting online hate speech or misinformation.
However, whether any associations exist between psychological wellbeing and online hate speech or misinformation has been unclear. To help clarify, Alexander and Wang used artificial intelligence tools to analyze posts from 54 Reddit communities relevant to hate speech, misinformation, psychiatric disorders, or, for neutral comparison, none of those categories.
Selected groups included r/ADHD, a community for discussing attention-deficit/hyperactivity disorder, r/NoNewNormal, dedicated to COVID-19 misinformation, and r/Incels, a community banned for hate speech.
The researchers used the large language model GPT-3 to convert thousands of posts from these communities into numerical representations capturing the posts’ underlying speech patterns.
These representations, or “embeddings,” could then be analyzed through machine-learning techniques and a mathematical approach known as topological data analysis.
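As a rough illustration of the embedding step, here is a minimal sketch assuming the current OpenAI Python SDK. The study used GPT-3 embeddings, so the model name below is only a present-day stand-in, and the example posts are placeholders.

```python
# Minimal sketch of the embedding step (not the authors' code).
# Assumes the current OpenAI Python SDK; the study used GPT-3 embeddings,
# so "text-embedding-3-small" here is only a present-day stand-in.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

posts = [
    "Placeholder post from one Reddit community...",
    "Placeholder post from another Reddit community...",
]

response = client.embeddings.create(
    model="text-embedding-3-small",
    input=posts,
)

# Each post becomes a long vector of floats; posts with similar speech
# patterns end up close together in this vector space.
embeddings = [item.embedding for item in response.data]
print(len(embeddings), "posts ->", len(embeddings[0]), "dimensions each")
```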
This analysis showed that speech patterns in hate speech communities were similar to those in communities for complex post-traumatic stress disorder and for borderline, narcissistic, and antisocial personality disorders. Links between misinformation and psychiatric disorders were less clear, though the analysis did find some connections to anxiety disorders.
Importantly, these findings do not at all suggest that people with psychiatric disorders are more prone to hate speech or misinformation. For one, there was no way of knowing if the analyzed posts were made by people actually diagnosed with disorders.
More research is needed to understand the links and explore such possibilities as hate speech communities mimicking speech patterns seen in psychiatric disorders.
The authors suggest their findings could help inform new strategies to combat online hate speech and misinformation, such as adapting elements of therapies developed for psychiatric disorders.
The authors add, “Our results show that the speech patterns of those participating in hate speech online have strong underlying similarities with those participating in communities for individuals with certain psychiatric disorders.
“Chief among these are the Cluster B personality disorders: Narcissistic Personality Disorder, Antisocial Personality Disorder, and Borderline Personality Disorder. These disorders are generally known for either lack of empathy/regard towards the wellbeing of others, or difficulties managing anger and relationships with others.”
Alexander notes, “While we looked for similarities between misinformation and psychiatric disorder speech patterns as well, the connections we found were far weaker. Besides a potential anxiety component, I think it is safe to say at this point in time that most people buying into or spreading misinformation are actually quite healthy from a psychiatric standpoint.”
Alexander concludes, “I want to emphasize that these results do not mean that individuals with psychiatric conditions are more likely to engage in hate speech. Instead, it suggests that people who engage in hate speech online tend to have similar speech patterns to those with cluster B personality disorders.
“It could be that the lack of empathy for others fostered by hate speech influences people over time and causes them to exhibit traits similar to those seen in Cluster B personality disorders, at least with regards to the target of their hate speech.
“While further studies would be needed to confirm this, I think it is a good indicator that exposing ourselves to these types of communities for long periods of time is not healthy and can make us less empathetic towards others.”
Funding: AWA was a Burroughs Wellcome Fund Scholar supported by a Burroughs Wellcome Fund Physician Scientist Institutional Award (G-1020069) to the Texas A&M University Academy of Physician Scientists (https://www.bwfund.org/funding-opportunities/biomedical-sciences/physician-scientist-institutional-award/grant-recipients/).
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. HW received no specific funding for this work.
About this AI, mental health, and neuroscience research news
Author: Claire Turner
Source: PLOS
Contact: Claire Turner – PLOS
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Topological data mapping of online hate speech, misinformation, and general mental health: A large language model based study” by Andrew Alexander et al. PLOS Digital Health
Abstract
Topological data mapping of online hate speech, misinformation, and general mental health: A large language model based study
The advent of social media has led to an increased concern over its potential to propagate hate speech and misinformation, which, in addition to contributing to prejudice and discrimination, has been suspected of playing a role in increasing social violence and crimes in the United States.
While literature has shown the existence of an association between posting hate speech and misinformation online and certain personality traits of posters, the general relationship and relevance of online hate speech/misinformation in the context of overall psychological wellbeing of posters remain elusive.
One difficulty lies in finding data analytics tools capable of adequately analyzing the massive amount of social media posts to uncover the underlying hidden links.
Machine learning and large language models such as ChatGPT make such an analysis possible. In this study, we collected thousands of posts from carefully selected communities on the social media site Reddit.
We then utilized OpenAI’s GPT-3 to derive embeddings of these posts, which are high-dimensional real-numbered vectors that presumably represent the hidden semantics of posts.
We then performed various machine-learning classifications based on these embeddings in order to identify potential similarities between hate speech/misinformation speech patterns and those of various communities.
Finally, a topological data analysis (TDA) was applied to the embeddings to obtain a visual map connecting online hate speech, misinformation, various psychiatric disorders, and general mental health.
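For readers who want a concrete picture of the pipeline the abstract describes, the sketch below is an illustrative approximation rather than the authors' code. It runs a scikit-learn classifier on synthetic stand-in embeddings and then builds a Mapper graph with the open-source KeplerMapper library; Mapper is one common topological data analysis construction. The data, parameters, and the choice of Mapper itself are all assumptions made for illustration.

```python
# Illustrative approximation of the abstract's pipeline (not the authors'
# code): classification on embeddings, then a Mapper-style topological
# data analysis. All data below is synthetic stand-in material.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
import kmapper as km  # pip install kmapper

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))    # stand-in for post embeddings
y = rng.integers(0, 2, size=200)  # stand-in community labels

# Step 1: how well can a classifier tell two communities apart from
# embeddings alone? High accuracy would indicate distinct speech patterns.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("Mean cross-validated accuracy:", scores.mean())

# Step 2: a Mapper graph. Project the embeddings to a 2-D "lens", cover
# the lens with overlapping bins, cluster the posts inside each bin, and
# connect clusters that share posts. The resulting graph's shape shows
# which groups of posts blend into one another.
mapper = km.KeplerMapper(verbose=0)
lens = mapper.fit_transform(X, projection=PCA(n_components=2))
graph = mapper.map(
    lens,
    X,
    cover=km.Cover(n_cubes=10, perc_overlap=0.3),
    clusterer=DBSCAN(eps=12.0, min_samples=3),
)
mapper.visualize(graph, path_html="reddit_mapper.html")
```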