It can be hard to know what to trust these days. With the rise of fake news and misinformation on social media, you could be forgiven for feeling unsure about online content, especially when it is popular. But a new experimental study suggests that adding “trust” and “distrust” buttons alongside the existing “like” button could be an important step in combating inaccurate posts.
The issue here is how to incentivize accuracy while discouraging misinformation. Researchers from UCL found that incentivizing accuracy in this way can cut the spread of false information by half.
“Over the past few years, the spread of misinformation, or ‘fake news’, has skyrocketed, contributing to the polarisation of the political sphere and affecting people’s beliefs on anything from vaccine safety to climate change to tolerance of diversity,” Professor Tali Sharot said in a statement. “Existing ways to combat this, such as flagging inaccurate posts, have had limited impact.”
Part of the challenge is that users are often rewarded for sharing fake information by receiving “shares” and “likes”, whereas true content may be less popular.
“Here, we have designed a simple way to incentivise trustworthiness, which we found led to a large reduction in the amount of misinformation being shared,” she added.
In a previous study, Professor Sharot and colleagues found that people are more likely to share information they have seen before, suggesting that repeated exposure to misinformation is taken as a sign of its accuracy. So, in this latest study, the team set out to test ways of counteracting that effect.
They built a simulated social media platform, used by 951 participants across six experiments, that operated much like the real thing. Users could share news articles, only half of which were accurate, to which other users could respond with the usual “like” or “dislike” reactions, as well as repost the content. In some versions of the experiment, users could also react with “trust” and “distrust” buttons.
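To see how such an incentive might work in principle, here is a minimal toy simulation – not the authors’ actual platform or model, with all parameters invented purely for illustration – in which a “trust” reaction that loosely tracks a post’s real accuracy is weighted into users’ repost decisions:

```python
import random

random.seed(42)

# Hypothetical sketch: all names and numbers are illustrative, not from the study.
def simulate(n_posts=1000, trust_buttons=False):
    """Simulate reposting on a toy feed where half the posts are false."""
    reposts_true, reposts_false = 0, 0
    for _ in range(n_posts):
        accurate = random.random() < 0.5  # half the articles are accurate
        likes = random.randint(0, 10)     # "likes" arrive regardless of accuracy
        if trust_buttons:
            # "Trust" reactions loosely track actual accuracy, acting as a
            # social carrot for sharing true content (and a stick otherwise).
            trust = random.randint(3, 10) if accurate else random.randint(0, 4)
            score = likes + 2 * trust     # users weight trust above likes
        else:
            score = likes
        if random.random() < score / 30:  # higher score -> more likely repost
            if accurate:
                reposts_true += 1
            else:
                reposts_false += 1
    return reposts_true, reposts_false

for condition in (False, True):
    t, f = simulate(trust_buttons=condition)
    print(f"trust buttons={condition}: accurate reposts={t}, false reposts={f}")
```

In this sketch, false posts still collect “likes”, but because “trust” reactions accrue mostly to accurate posts, weighting them into the repost decision skews what spreads toward true content.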
The results showed that the incentive structure worked: people relied more heavily on the trust/distrust buttons than on the regular like/dislike ones. Essentially, these new reaction buttons – what the authors referred to as social “carrots” and “sticks” – turned sharing trustworthy, accurate content into a socially rewarded behaviour. Additional analysis using computational modeling showed that the introduction of the trust/distrust buttons also made participants more discerning about what they chose to repost.
Interestingly, the researchers also found that participants who used the version of the platform with the new trust/distrust buttons ended up with more accurate beliefs as well.
“Buttons indicating the trustworthiness of information could easily be incorporated into existing social media platforms, and our findings suggest they could be worthwhile to reduce the spread of misinformation without reducing user engagement,” co-lead author and PhD student Laura Globig added.
“While it’s difficult to predict how this would play out in the real world with a wider range of influences, given the grave risks of online misinformation, this could be a valuable addition to ongoing efforts to combat misinformation.”
The study is published in eLife.