New algorithm can distinguish cyberbullies from normal Twitter users with 90% accuracy

Summary: A new machine learning system identifies cyberbullies on the popular social media site Twitter with 90% accuracy.

Source: Binghamton University

A team of researchers, including faculty at Binghamton University, has developed machine learning algorithms that can identify bullies and aggressors on Twitter with 90 percent accuracy.

Effective tools for detecting harmful actions on social media are scarce, as this type of behavior is often ambiguous in nature and/or exhibited via seemingly superficial comments and criticisms. Aiming to address this gap, a research team featuring Binghamton University computer scientist Jeremy Blackburn analyzed the behavioral patterns of abusive Twitter users and how they differ from those of other Twitter users.

“We built crawlers — programs that collect data from Twitter via a variety of mechanisms,” said Blackburn. “We gathered tweets of Twitter users, their profiles, as well as (social) network-related things, like who they follow and who follows them.”
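To make that description concrete, here is a minimal sketch of such a crawler in Python using the tweepy library. The library choice, placeholder credentials, and function names are illustrative assumptions, not the team’s actual tooling:

```python
import tweepy

# Placeholder credentials; real keys come from the Twitter developer portal.
auth = tweepy.OAuth1UserHandler(
    "API_KEY", "API_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET"
)
api = tweepy.API(auth, wait_on_rate_limit=True)

def crawl_user(screen_name, n_tweets=200):
    """Gather one user's tweets, profile, and follow links, as described."""
    profile = api.get_user(screen_name=screen_name)
    tweets = api.user_timeline(screen_name=screen_name, count=n_tweets)
    return {
        "profile": profile._json,
        "tweets": [t.text for t in tweets],
        "followers": api.get_follower_ids(screen_name=screen_name),
        "friends": api.get_friend_ids(screen_name=screen_name),
    }
```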


The researchers then performed natural language processing and sentiment analysis on the tweets themselves, along with a variety of social network analyses on the connections between users. They developed algorithms to automatically classify two specific types of offensive online behavior: cyberbullying and cyberaggression. The algorithms identified abusive users on Twitter, those who engage in harassing behavior such as sending death threats or making racist remarks, with 90 percent accuracy.
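As an illustration of that feature-building step, the sketch below combines a tweet-level sentiment score with a simple network statistic. VADER sentiment and networkx degree centrality are hypothetical stand-ins, not necessarily the paper’s exact text and network attributes:

```python
import networkx as nx
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # needs nltk.download("vader_lexicon")

sia = SentimentIntensityAnalyzer()

# Toy inputs: one user's tweets plus a small follow graph.
tweets = ["great game last night!", "you are pathetic, log off forever"]
follow_graph = nx.DiGraph([("user_a", "user_b"), ("user_c", "user_a")])
centrality = nx.degree_centrality(follow_graph)  # computed once over the whole graph

def user_features(user_id, user_tweets):
    """Combine a text-based attribute (sentiment) with a network-based one."""
    scores = [sia.polarity_scores(t)["compound"] for t in user_tweets]
    return {
        "mean_sentiment": sum(scores) / len(scores),
        "min_sentiment": min(scores),               # the user's most hostile tweet
        "degree_centrality": centrality.get(user_id, 0.0),
    }

print(user_features("user_a", tweets))
```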

“In a nutshell, the algorithms ‘learn’ how to tell the difference between bullies and typical users by weighing certain features as they are shown more examples,” said Blackburn.
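A minimal sketch of what that supervised “learning” looks like in practice, with synthetic stand-in data and a random forest as an illustrative, not confirmed, model choice:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))           # stand-in text/user/network features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in labels: 1 = abusive, 0 = typical

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("held-out accuracy:", clf.score(X_te, y_te))
print("learned feature weights:", clf.feature_importances_)  # the "weighing" Blackburn describes
```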

While this research can help mitigate cyberbullying, it is only a first step, said Blackburn.

“One of the biggest issues with cyber safety problems is that the damage being done is to humans, and is very difficult to ‘undo,’” said Blackburn. “For example, our research indicates that machine learning can be used to automatically detect users who are cyberbullies, and thus could help Twitter and other social media platforms remove problematic users. However, such a system is ultimately reactive: it does not inherently prevent bullying actions, it just identifies them taking place at scale. And the unfortunate truth is that even if bullying accounts are deleted, even if all their previous attacks are deleted, the victims still saw and were potentially affected by them.”

Blackburn and his team are currently exploring proactive mitigation techniques to deal with harassment campaigns.

About this neuroscience research article

Source:
Binghamton University
Media Contacts:
Robert Bock – Binghamton University
Image Source:
The image is in the public domain.

Original Research: Closed access
“Detecting Cyberbullying and Cyberaggression in Social Media” by Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, Athena Vakali, and Nicolas Kourtellis. ACM Transactions on the Web. doi: unknown.

Abstract

Detecting Cyberbullying and Cyberaggression in Social Media

Cyberbullying and cyberaggression are increasingly worrisome phenomena that affect people across all demographics. Already in 2014, more than half of young social media users worldwide experienced them in some form, being exposed to prolonged and/or coordinated digital harassment. Victims can experience a wide range of emotional consequences such as embarrassment, depression, and isolation from other community members, which can lead to even more serious consequences such as suicide attempts. Nevertheless, tools and technologies to understand and mitigate them are scarce and mostly ineffective. In this paper, we take the first concrete steps to understand the characteristics of abusive behavior on Twitter, one of today’s largest social networks. We analyze 1.2 million users and 2 million tweets, comparing users participating in discussions around seemingly normal topics, like the NBA, to those more likely to be hate-related, such as the Gamergate controversy or the gender pay inequality at the BBC. We also explore specific manifestations of abusive behavior, i.e., cyberbullying and cyberaggression, in one of the hate-related communities (Gamergate). We present a robust methodology to distinguish bullies and aggressors from regular users by considering text, user, and network based attributes. Using various state-of-the-art machine learning algorithms, we can classify these accounts with over 90% accuracy and AUC. Finally, we look at the current status of the Twitter accounts of users marked as abusive by our methodology and discuss the performance of the mechanisms used by Twitter to suspend users.
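For readers unfamiliar with the two metrics the abstract reports, here is a hedged sketch of how accuracy and AUC are computed with scikit-learn; the labels and scores below are made up for illustration:

```python
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                  # 1 = abusive account, 0 = regular
y_prob = [0.9, 0.2, 0.8, 0.7, 0.3, 0.1, 0.4, 0.6]  # classifier scores
y_pred = [int(p >= 0.5) for p in y_prob]           # threshold at 0.5

print("accuracy:", accuracy_score(y_true, y_pred))  # fraction labeled correctly
print("AUC:", roc_auc_score(y_true, y_prob))        # ranking quality across all thresholds
```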
