Why Fake News Spreads (and How to Stop It)
Do you remember Edgar M. Welch? On December 4, 2016, Mr. Welch drove 350 miles from his home in North Carolina to Washington, DC, where he entered the Comet Ping Pong pizza restaurant and began firing an AR-15 rifle. Mr. Welch had come to the restaurant to liberate children whom Hillary Clinton and others were allegedly keeping in the basement as sex slaves.
The fact that Comet Ping Pong doesn't have a basement is arguably the least important falsehood that led Mr. Welch on his misguided mission.
At his sentencing, Mr. Welch acknowledged that he'd behaved "foolishly and recklessly", but he never renounced the fake news that motivated him. (He's out of prison now and keeping a low profile.) Meanwhile, QAnon, Alex Jones, and others continued to spread variants of the utterly baseless "Pizzagate" conspiracy theory, and it's still reportedly being floated at Trump rallies.
Pizzagate illustrates one of the dangers of fake news: It creates false beliefs that lead to harmful actions. Inaction can be a consequence too. Deceptive claims about election fraud in 2020 have led to reduced civic engagement ("Why vote if the elections are rigged?") and diminished confidence in our political system. Broadly speaking, fake news is having a corrosive effect on democracy.
Unfortunately, fake news has a self-perpetuating quality: It spreads through social media more rapidly than legitimate news does, crowding out accurate information, and, through repeated exposure, taking on an air of credibility. But there are grounds for hope. The purpose of this newsletter is to describe a new study, published last week, which offers more insight into why fake news spreads – and what we can do to prevent it. Although fake news is surely as old as news itself, statistics provides unique tools for understanding both the causes and the remedies.
Following is some context for the study I'll be discussing.
Fake news, misinformation, and disinformation
I'm happy to tell you that the clearest, most succinct definition of fake news I've ever seen can be found on Wikipedia:
Fake news is false or misleading information presented as news.
(At the end, I'll explain why that definition makes me happy.)
Fake news spreads in part because a small number of people are highly motivated to put it out there. For instance, just 12 people created 65% of the anti-vaccine disinformation shared or posted on Facebook and Twitter in early 2021.
Unlike the "disinformation dozen", some sources of inaccuracy are well-intentioned. For example, I consider NPR among the most reliable, insightful news sources, and yet here's how the aforementioned study was summarized on "All Things Considered" last May:
"Researchers have found just 12 people are responsible for the bulk of the misleading claims and outright lies about COVID-19 vaccines that proliferate on Facebook, Instagram and Twitter."
In fact, the researchers did not analyze Instagram data, so NPR's reference to Instagram is inaccurate.
This is a relatively trivial mistake, given the study's main findings. I only mention it to help illustrate the difference between misinformation and disinformation.
Someone who shares misinformation believes it to be accurate. They're not trying to deceive you, in other words. NPR's reference to Instagram is presumably a form of misinformation. (Why infer that? Because the rest of the summary is accurate, and because even if you were concerned about NPR's liberal biases, the organization has a reputation for accurate reporting. Anyway, there'd be no advantage to adding Instagram to the list of social media platforms; the story itself is already quite striking.)
A person who shares disinformation knows that it's false or distorted but shares it anyway. Although I wouldn't describe NPR's reference to Instagram as disinformative, I suppose I should acknowledge a tiny, one-in-a-million chance that the journalist has some secret bias against Instagram and intentionally slipped in a reference to the platform, in which case we'd be looking at an example of disinformation.
As you can see, whether content is misinformation or disinformation depends on the beliefs of the person who shares it. If they don't know it's false, it's misinformation. If they know better, it's disinformation. The 12 people I mentioned earlier are often called the "disinformation dozen", but that would be a misnomer if some of them truly believe the anti-vaccine falsehoods they've been spreading.
Fake news is routinely described as a form of disinformation. This is accurate, but I would describe it a little more specifically: Fake news, by definition, begins life as disinformation, but as it spreads from person to person, it might be misinformation in the hands of some but disinformation for others. Thus, when we think about how to stem the tide of fake news, we have to consider both those who misinform and those who disinform.
This brings me to the new study, published last week in the prominent journal Proceedings of the National Academy of Sciences (PNAS). In this study, Dr. Gizem Ceylan at Yale and two colleagues from USC explored why fake news spreads and proposed some solutions.
(Ceylan and colleagues were interested in the broader category of misinformation, but since they focused on false news headlines, I'll use the term "fake news" to describe their methods and findings.)
Study hypotheses
Ceylan and colleagues contrasted three explanations for the spread of fake news.
1. Lack of awareness.
According to this explanation, people share fake news because they don't recognize it as fake. Either they're not paying enough attention, or they have poor critical thinking skills, or they're processing the information emotionally. Whatever the cause, the content they share can be called misinformation rather than disinformation, because they don't know it's false.
2. Partisan bias.
People share fake news that's consistent with their own beliefs. This "my-side bias" could be triggered by beliefs about politics, religion, health, or pretty much any other topic. Here, what's shared might be misinformation or disinformation, depending on whether people know it's false.
3. Platform-driven habits.
Ceylan and colleagues note that social media platforms are set up to promote habitual sharing. People share content because they're frequently reinforced by likes, comments, new followers, etc. Thus, when they encounter fake news, they may automatically share it without thinking about what they're doing, because they've acquired the habit of sharing all sorts of things.
Study methods
The study was conducted with Facebook users who viewed 16 news headlines and chose whether or not to share each one. Eight of the headlines were true and eight were false. An example of a false headline is the coconut oil image at the beginning of this newsletter. (Coconut oil doesn't destroy viruses, nor is there a "history" of research on this topic.) Another false headline used in the study is shown below:
Ceylan and colleagues measured two key variables:
1. How many of the 16 true and false headlines did participants share?
2. How automatically (i.e., habitually) do participants share social media content in their daily lives? This was measured by asking each participant to rate how much they agree with statements like "I start sharing social media content before I realize I'm doing it".
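To make those two measures concrete, here's a minimal sketch of how they might be scored, purely as an illustration. The function names, the 1-to-5 agreement scale, and the simple averaging are my own assumptions, not the authors' actual scoring procedure.

```python
# Illustrative scoring of the two key variables (assumed names and scales).

def sharing_counts(decisions, truth_labels):
    """Count how many true and how many false headlines a participant shared.

    decisions    -- list of 16 booleans: did the participant share headline i?
    truth_labels -- list of 16 booleans: is headline i true?
    """
    true_shared = sum(1 for shared, is_true in zip(decisions, truth_labels)
                      if shared and is_true)
    false_shared = sum(1 for shared, is_true in zip(decisions, truth_labels)
                       if shared and not is_true)
    return true_shared, false_shared

def habit_score(ratings):
    """Average agreement (assumed 1-5 scale) with statements such as
    'I start sharing social media content before I realize I'm doing it'."""
    return sum(ratings) / len(ratings)

# Example participant: shares 6 of 8 true and 5 of 8 false headlines,
# and reports fairly automatic sharing habits.
decisions = [True] * 6 + [False] * 2 + [True] * 5 + [False] * 3
truth_labels = [True] * 8 + [False] * 8
print(sharing_counts(decisions, truth_labels))  # (6, 5)
print(habit_score([4, 5, 3, 4]))                # 4.0
```

The gap between the number of true and false headlines shared is one simple way to think about the "discernment" that comes up in the findings below.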
The researchers conducted four separate experiments, each with a slightly different tweak to their basic methods.
Study findings
Experiment 1 (200 participants) set the stage for the rest of the study. This experiment showed that the more habitually a person shares on social media, the more of the 16 Facebook news headlines they shared. No surprise there. But greater sharing also meant less discernment: People who shared a lot in their everyday lives shared true headlines about as often as false ones. (People who shared less habitually tended to share only the true headlines.)
This experiment suggests that fake news is mostly spread by people who share all kinds of news. In other words, even if small groups of people like the "disinformation dozen" create a disproportionate amount of fake news, what spreads that news isn't people who devote their lives to disseminating fake news, but rather people who share all sorts of things and simply aren't very discriminating about what they share.
In Experiment 2, the researchers wanted to know whether directing participants' attention to the accuracy of content might prompt them to share more carefully. In this experiment, half of the 838 participants were asked to judge the accuracy of the 16 headlines before deciding whether to share each one, while the other half were asked to judge the accuracy after deciding.
The results yielded some good news and some bad news. The good news is that overall, when people made accuracy judgments before deciding whether to share the 16 headlines, they shared the fake ones less frequently. The bad news is that the most habitual sharers weren't affected.
This is an important finding, in my view. It tells us that encouraging people to reflect on the accuracy of news content before sharing it can help reduce the volume of fake news. However, it won't help much, because a small number of people will continue to overshare everything, including the fake stuff.
Experiment 3 (836 participants) showed that although people were more likely to share headlines consistent with their own political views, people who habitually share were, once again, less discriminating: They shared headlines whether or not those headlines matched their views. Less habitual sharers were more discerning, in the sense of mostly sharing headlines consistent with their personal views. This is an interesting finding, because it suggests that fake news is spread more by habit than by political bias. (Unfortunately, the experiment wasn't set up to probe specific motives for sharing. Perhaps over-sharers include content that's inconsistent with their beliefs simply because they want to ridicule that content.)
Experiment 4 was the researchers' final attempt to promote more careful sharing behavior. This time, before beginning the main task, participants completed a training session where they chose whether to share each of 80 headlines. Some participants were rewarded for sharing accurate headlines, some were rewarded for sharing inaccurate headlines, and some received no reward. The "reward" consisted of an announcement that they had won "points". Regardless of how they responded, participants were told at the end of the training session, before beginning the main task, that the points they'd received qualified them to participate in a lottery for $20. In other words, they didn't receive a tangible reward but rather the prospect of one.
The good news from this experiment is that sharing behavior for the 16 headlines was affected by the "rewards" provided in the training session. Even the habitual sharers began to share more true headlines and fewer false ones if they'd been rewarded previously for sharing accurate headlines. People's online sharing habits are malleable, in other words.
Although Experiment 4 provides some good news, we should keep in mind that people who had been rewarded for sharing inaccurate headlines subsequently shared more fake ones. Rewards will thus only be helpful if they're given for sharing credible information. In the real world, people who post fake news to websites that specialize in such content will be rewarded by approval from other users.
How can we diminish the volume of fake news?
Does this study provide any guidance on reducing the spread of fake news?
Here's what the results tell us: Fake news is mostly spread by people who share a lot of social media content, because their sharing is habitual and not very reflective. Guiding people's attention to the accuracy of what they share reduces the volume of fake news slightly, but not by much, because those who share the most aren't affected. However, what does reduce the spread of fake news, even among those who share a lot, is rewarding appropriate sharing.
Of course, the "rewards" provided in this study aren't readily applicable to the real world. (Facebook/Meta isn't likely to run a lottery for people who ignore fake news.) Ceylan and colleagues instead suggest two changes to the way social media platforms reward users:
1. Revised algorithms.
Currently, what makes it to the top of a person's feed is content that's popular – e.g., liked the most. Ceylan and colleagues recognize that this is unlikely to change. However, they suggest algorithmic de-prioritization of unverified content. In other words, any content that's not independently verified by some neutral fact-checking organization would go to the bottom of the feed.
Although in theory this could be helpful, it doesn't seem feasible in practice. Human fact-checkers couldn't even begin to keep up with the amount of content shared, nor could AI catch all of the falsehoods and distortions. Even if an algorithm were simply told which sources are and are not credible, it still wouldn't be able to identify all the unreliable sources because (a) there are too many of them, (b) new ones are constantly being created, and (c) in between credible sources and, say, Donald Trump's Truth Social platform, there's considerable gray area. (How can a source be identified as credible? What would an algorithm do with this newsletter, for instance? Every week it gets shared dozens or hundreds of times, according to Substack analytics, but am I fake news? The only way to answer that question would be to get a neutral fact-checking organization like Snopes.com to decide, and that would require a human reader...)
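Still, to make the suggestion concrete, here's a minimal sketch of what de-prioritizing unverified content might look like as a feed-ranking rule. The Post fields, the verified flag, and the sort key are my own illustrative assumptions; real ranking systems are vastly more complicated, and the hard part, as noted above, is deciding what counts as "verified" in the first place.

```python
from dataclasses import dataclass

# Hypothetical post record; "verified" would be set by a neutral
# fact-checking process, which is exactly the hard part in practice.
@dataclass
class Post:
    text: str
    likes: int
    verified: bool

def rank_feed(posts):
    """Sort posts by popularity, but push unverified content below
    everything that has been independently verified."""
    return sorted(posts, key=lambda p: (not p.verified, -p.likes))

feed = [
    Post("Coconut oil destroys viruses!", likes=5000, verified=False),
    Post("Local election results certified", likes=120, verified=True),
    Post("New vaccine study published", likes=80, verified=True),
]

for post in rank_feed(feed):
    print(post.likes, post.text)
# Verified posts appear first (ordered by likes); the popular but
# unverified post drops to the bottom of the feed.
```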
2. Habitual posting disruptions.
Ceylan and colleagues also suggest that social media platforms include additional buttons, like "fact-check" or "skip", rather than only providing buttons for liking and sharing. They cite studies suggesting that doing so might disrupt the mindless, habitual process of sharing.
This seems like a good idea – and a feasible one, because it would require minimal changes to the social media interface. Having a "fact-check" icon right next to the thumbs-up and the heart sends an important message to users that fact checking is important, and it would give them immediate access to the resource. But would people make use of it? And, would AI be up to the task?
Throughout this newsletter I've referred to "fake news" as if there's a clear distinction between what is and isn't fake. The distinction isn't always easy to make. A practical limitation of both of the researchers' suggestions is that, ultimately, humans would still be needed to sift through content and identify falsehoods. AI might be good at recognizing Pizzagate or claims that Donald Trump won the 2020 election as fake news, but it struggles with subtler yet still important deceptions.
Some AI programs are quite good at recognizing fake news that's AI-generated – Grover, for example, claims 92% accuracy. Unfortunately, these programs are just as good at generating fake news as they are at recognizing it, meaning that there will continue to be a stalemate as improvements in their ability to detect the bad stuff are matched by improvements in their ability to create it. Meanwhile, a lot of fake news is generated by humans rather than machines.
Front-end vs. back-end strategies
Ceylan and colleagues recommended changing social media platforms in ways that would have real-time effects. I would call these front-end strategies. What's appealing about them is that they're systemic and relatively easy to implement – you make a small change to the social media platform, and every user is affected. They're "front-end" strategies in the sense that their impact is presumably strongest at the moment a user is deciding whether or not to share social media content.
At the same time, I have faith in the effectiveness of "back-end strategies", which begin to be implemented before the user engages with social media. Back-end strategies include persuading social media companies to work harder at identifying and weeding out fake news, so that users are exposed to less of it.
A very different sort of back-end strategy – and perhaps the most powerful one – can be found in K-12 educational settings, where students are taught critical thinking and media literacy skills that can, among other things, help them spot fake news and appreciate the importance of either ignoring it or actively debunking it.
Debunking efforts in K-12 settings and beyond can be supported by neutral fact-checking organizations (Snopes, FactCheck, PolitiFact, SciCheck, etc.) and by the inclusion of so-called "prebunking" activities.
Prebunking has gotten a lot of attention in recent years. The idea is that exposing people to a small dose of fake news, in order to show them how it works, is a good way to inoculate them against its influence. A highly engaging example is the online game Bad News, in which the player's goal is to become a "fake news tycoon". As you learn techniques for spreading fake news and receive rewards for doing so, you also become better able to recognize it. A UK study published last year is one of several showing that after playing Bad News, people begin to judge fake news content as substantially less reliable.
The UK study also showed that the benefits of playing Bad News fade over time unless reinforced. This suggests that educational strategies like Bad News would benefit from reinforcement by the platform-level changes described earlier. In other words, a combination of back-end support from educational activities designed to build critical thinking and the ability to identify fake news, along with front-end support from social media interfaces that encourage fact-checking, might help stem the tide of fake news and prevent people from trying to liberate fictional captives from the basements of buildings that don't have basements.
Conclusion: What can we do?
Early on I praised the clarity and conciseness of Wikipedia's definition of fake news ("Fake news is false or misleading information presented as news").
This definition makes me happy, because it comes from a decentralized, open-source encyclopedia. It wasn't necessarily written by an expert, but by people who cared about providing the best possible definition. Ultimately this is where we may find the strongest sources of resistance to fake news: people who are motivated to get it right. (See here for an interesting story about how crowdsourcing helped The Guardian fact-check nearly half a million documents.)
We may not be doing enough to support people who want to get it right. In many states, K-12 curricular standards call for the teaching of media literacy as early as elementary school, but actual instruction is limited and inconsistent, owing to lack of time and other priorities such as performance on state-mandated achievement tests. So perhaps what we need most – apart from conversations with young people and pressure on social media companies – is more conversations with school boards and administrators, and more support for teachers, so that they can prioritize media literacy and, specifically, the identification and appropriate treatment of fake news. (Media Literacy Now is a good place to start for anyone interested in educationally-focused advocacy.)
Thanks for reading! Feel free to test your comprehension by deciding whether the following headline (from The Onion) is fake news or not: