Facebook today is, once again, theoretically ramping up enforcement against hate speech, this time with a new policy prohibiting Holocaust denial on the platform.
The change is due to a “well-documented rise in anti-Semitism globally,” Facebook executive Monika Bickert wrote in a corporate blog post today.
The policy is a complete 180 for Facebook CEO Mark Zuckerberg, who in a 2018 interview specifically described Holocaust denial as the kind of “deeply offensive” speech he nonetheless felt should be permitted on the platform. The next day, amid blowback, he “clarified” his position:
Our goal with fake news is not to prevent anyone from saying something untrue—but to stop fake news and misinformation spreading across our services. If something is spreading and is rated false by fact checkers, it would lose the vast majority of its distribution in News Feed. And of course if a post crossed [the] line into advocating for violence or hate against a particular group, it would be removed. These issues are very challenging but I believe that often the best way to fight offensive bad speech is with good speech.
Zuckerberg said in a Facebook post today that his own thinking “has evolved” amid the growth in anti-Semitic violence in recent years. “Drawing the right lines between what is and isn’t acceptable speech isn’t straightforward,” he added, “but with the current state of the world, I believe this is the right balance.”
The ban on Holocaust denial is just the latest in a huge suite of policy changes and proposals Facebook has made in the past two weeks explicitly related to hate speech, misinformation, or “influence operations.”
After all this time, why now?
Previous much-publicized efforts by Facebook to reduce hate speech and misinformation on the platform have not gone particularly well overall, and the world is still dealing with the effects of how quickly and widely misinformation can spread thanks to social media. A new study released today finds that the problem is getting rapidly worse, not better.
The digital project arm of the German Marshall Fund, a nonpartisan think tank, published a report today finding that Facebook has not only failed to limit the spread of false claims on its platform but has allowed engagement with disinformation to more than triple since 2016.
The study tallies Facebook interactions with what the GMF calls “deceptive sites,” which fall into two broad categories. The first category includes sites that “repeatedly published content that is provably false” and is conveniently called “false content producers.” The second, larger group of sites, called “manipulators,” doesn’t usually run wholly untrue stories but instead “egregiously distort[s] or misrepresent[s] information to make an argument.”
Facebook engagement with both kinds of deceptive sites has increased 242 percent since this time in 2016, GMF found, with the vast majority of that growth happening in the past year, since the third quarter of 2019. Interactions with outright false content have just about doubled, but interactions with “manipulator” sites have increased by close to 300 percent.
The study includes more than 720 sites under the “deceptive” umbrella, but GMF found that the top 10 sites alone account for a whopping 62 percent of all the interactions it tracked, with the other 711 sites together accounting for just 38 percent. All of the top 10 qualified as “manipulator” sites, including Breitbart and The Daily Wire. Although the top sites all skew conservative, GMF noted, the study includes left-leaning and non-political deceptive sites as well.
The perennially popular Fox News, which garnered the most interactions of any of the sites included in the study, also qualified under the “manipulator” label. The research team found that Fox has published “irresponsible and misleading claims,” particularly related to COVID-19. At the same time, Fox rated more highly than other outlets in the “manipulator” category because it follows journalistic practices such as correcting errors, avoiding deceptive headlines, labeling advertising, and disclosing ownership.
The misinformation spread by deceptive sites is a key part of what GMF calls the “disinformation supply chain,” which, as we learned in 2016, can have major real-world effects. Such articles are designed to push emotional hot buttons; Facebook’s algorithms then amplify that content to more users, and the cycle repeats.
“Disinformation is infecting our democratic discourse at rates that threaten the long-term health of our democracy,” said Karen Kornbluh, the director of GMF Digital and head of the project. “A handful of sites masquerading as news outlets are spreading even more outright false and manipulative information than in the period around the 2016 election. This data underscores that de-amplifying—or adding friction to the spread of—content from a handful of the most dangerous sites could dramatically decrease disinformation online.”