Google and MIT prove social media can slow the spread of fake news

During the COVID-19 pandemic, the public has been battling a whole other threat: what U.N. Secretary-General António Guterres has called a “pandemic of misinformation.” Misleading propaganda and other fake news are easily shared on social networks, threatening public health. As many as one in four American adults has said they will not get the vaccine. So while the United States finally has enough doses to reach herd immunity, too many people are worried about the vaccines (or skeptical that COVID-19 is even a dangerous disease) for that threshold to be reached.

However, a new study out of the Massachusetts Institute of Technology and Google’s social technology incubator Jigsaw offers some hope for curbing misinformation on social networks. In a large study involving 9,070 American participants—controlling for gender, race, and partisanship—researchers found that a few simple UI interventions can discourage people from sharing fake news about COVID-19.

How? Not through “literacy” training that teaches people the difference between reliable sources and lousy ones. And not through content that’s been flagged as false by fact-checkers, as Facebook has attempted.

Instead, the researchers introduced several different prompts through a simple popup window, all with a single goal: to get people to think about the accuracy of what they’re about to share. When primed to consider a story’s accuracy, people were up to 20% less likely to share a piece of fake news. “It’s not that we’ve come up with an intervention you give people once, and they’re set,” says MIT professor David Rand, the study’s lead author. “Instead, the point is that the platforms are, by design, constantly distracting people from accuracy.”
