
Just 10 “Superspreaders” Are Responsible For Over A Third Of Misinformation On Twitter

According to a new study, just 10 “superspreaders” were responsible for over a third of the misinformation posted on Twitter (now X) during a 10-month period in 2020.

Misinformation has become a serious concern in recent years. On the face of it, this type of content may appear to be simply irritating, but it can also have significant negative impacts. In particular, misinformation can sow distrust in democratic institutions or even threaten faith in public health systems. 


The January 6, 2021 attack on the US Capitol is an extreme example of how misinformation can lead to violent political unrest, while similar conspiratorial information seriously disrupted measures to address COVID-19 during the pandemic.

For some time, researchers have been aware that some individuals on social media are more adept at spreading misinformation than others. So-called “superspreaders” – users who consistently share a disproportionately high amount of low-credibility content – may well be responsible for much of the problem. 

In fact, a study into the impact of misinformation on the 2016 US election found that 0.1 percent of Twitter users were responsible for sharing 80 percent of the dodgy content circulating at the time.

During the pandemic, an analysis found that much of the low-credibility information related to COVID-19 was shared by popular pages and accounts that had been verified by Facebook and Twitter, respectively. More worrying still, in 2021, the Center for Countering Digital Hate identified 12 accounts – the so-called “Disinformation Dozen” – that were responsible for spreading almost two-thirds of the anti-vaccination claims on social media.


In response to the rise in misinformation being shared, social media platforms are under increasing pressure to shore up their efforts to address it. But how do you identify superspreaders, especially when existing studies have used different methods to find them?

That’s the inspiration behind this latest study, conducted by researchers from the Observatory on Social Media at Indiana University, and the Department of Computer Science at the University of Exeter, in the UK. The team analyzed 2,397,388 tweets containing low-credibility content – which they defined as “content originally published by low-credibility, or untrustworthy sources” – that were sent by 448,103 users between January and October 2020.

According to their analysis, over a third of these posts came from just 10 accounts, while just 1,000 accounts were responsible for 70 percent.

These superspreaders were mostly anonymous “hyperpartisan” accounts, along with high-profile political commentators and strategists. Included among them were official accounts from both the Democratic and Republican parties, and the account of @DonaldJTrumpJr, belonging to former President Trump’s son and political advisor.


It is worth noting that many of these superspreader accounts were identified in 2020 and have since become inactive or been banned from the platform. At the time, Twitter was experimenting with ways to combat misinformation, an approach that stands in stark contrast to the situation now: X has laid off much of its content moderation staff and has even disbanded its election integrity team.

It is also important to note that the amount of low-credibility information in the data remained high even after 2,000 bot accounts were removed.

“In this paper we address two research questions at the core of the digital misinformation problem. Specifically, we compare the efficacy of several metrics in identifying superspreaders of low-credibility content on Twitter,” the authors wrote. “We then employ the best performing metrics to qualitatively describe these problematic accounts.”

“A manual classification of the active superspreaders we identify reveals that over half are heavily involved in political conversation. Although the vast majority are conservative, they include the official accounts of both the Democratic and Republican parties. Additionally, we find a substantial portion of nano-influencer accounts, prominent broadcast television show hosts, contrarian scientists, and anti-vaxxers.”


The team’s results show that removing superspreaders from the platform does result in a large reduction in the spread of low-credibility information. However, they note that suspending accounts to reduce harm may be interpreted as an effort to limit freedom of speech.

“The effectiveness of other approaches to moderation should be evaluated by researchers and industry practitioners,” the team explain. “For instance, platforms could be redesigned to incentivize the sharing of trustworthy content.”

Although this work focuses on superspreaders of misinformation, it may well open the door for future research into “amplifier” accounts, users who may reshare misinformation originally published by others.

The study is published in PLOS ONE.