Should Facebook Buy Snopes?

How the site could get serious about fact-checking

Donald Trump speaks at a Facebook-sponsored primary debate in Cleveland.
Scott Olson / Getty

A lot of what’s on Facebook isn’t true. Maybe that’s not surprising, because a lot of what’s on the internet at large isn’t true, and Facebook is a place for people to share what they’re thinking, or seeing, or reading.

That’s certainly how Facebook seems to conceive of its role as a distributor of information. “We are a tech company, not a media company,” Mark Zuckerberg, Facebook’s CEO, said this summer. Chris Cox, the company’s chief product officer, elaborated a few weeks ago: “A media company is about the stories it tells. A technology company is about the tools it builds.”

But in its attempt to be an apolitical platform for news, Facebook also helps circulate misinformation, much of it politically tinged. In The New York Times Magazine, John Herrman explored how Facebook “centralized online-news consumption in an unprecedented way,” giving rise to news sources so far outside the mainstream that their interest in actual facts is subordinate to their thirst for virality.

These news sources’ stories have sidled into everyone’s personal Facebook feeds, sneaking in between baby pics and status updates. On one side: “Hillary Caught Wearing Earpiece Again!” (She never wore an earpiece.) On the other: “The Photos of Donald Trump with his Daughter That the Campaign Doesn’t Want You to See!” (Most were Photoshopped.)

In May, Gizmodo reported that Facebook had assigned a team of human “editors” to curate its Trending section, the list of topics in the upper-right corner of the desktop site. The editors were charged with making sure the items that appeared were factual, and linking them to a news story from a reputable source.

But the human element got Facebook in trouble. In another Gizmodo story, a former Facebook editor said he was instructed to suppress news about conservative topics. At first, Zuckerberg denied the practice—“We have rigorous guidelines that do not permit the prioritization of one viewpoint over another or the suppression of political perspectives,” he wrote in a post in May—but within months, the humans were fired and algorithms took over.

That immediately went poorly. Just days after the switch, the Trending section prominently displayed a fake news story declaring that Megyn Kelly had been fired from her job as a Fox News host for being a “traitor” and supporting Hillary Clinton for president. By 9:30 a.m., it had been removed from the Trending widget, but not before it spent hours there, likely seen by millions.

The fake article’s appearance in the Trending section was particularly problematic: Stories posted there come with an implied stamp of approval from Facebook, which might make users more likely to trust them. The fact that Facebook took it down makes it clear the company didn’t want to validate the misinformation.

But what about stories that appear in the news feed that are so clearly false a quick Google search disproves them? Should Facebook be filtering those?

The company’s already using machine learning—different algorithms from the ones that drive the Trending section—to try to catch misinformation on the platform, a Facebook spokesperson told me. If a post containing a link to a news story gets a lot of pushback in the comments—links to posts debunking it on Snopes or PolitiFact, two popular fact-checking sites, for example—an algorithm infers that the original news story is probably fake. Once the link is flagged internally, it’s less likely to crop up in users’ news feeds as they scroll—no matter who posts it. That means anybody else who shared the same link will have their posts suppressed, too. (The algorithm only works on links, the spokesperson said, not text-only posts.)
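To make that mechanism concrete, here is a rough sketch, in Python, of how a comment-pushback signal like the one described might work. It is purely illustrative: the Post structure, the FACT_CHECK_DOMAINS list, and the 20 percent threshold are all invented here, and Facebook’s real system is not public.

```python
# Illustrative sketch only: every name here (Post, FACT_CHECK_DOMAINS,
# the 20% threshold) is hypothetical, not drawn from Facebook's actual code.
from dataclasses import dataclass, field

FACT_CHECK_DOMAINS = ("snopes.com", "politifact.com")

@dataclass
class Post:
    shared_url: str | None                          # the news link the post shares, if any
    comments: list[str] = field(default_factory=list)

def debunk_ratio(comments: list[str]) -> float:
    """Fraction of comments that link to a known fact-checking site."""
    if not comments:
        return 0.0
    hits = sum(
        1 for text in comments
        if any(domain in text for domain in FACT_CHECK_DOMAINS)
    )
    return hits / len(comments)

def flag_disputed_links(posts: list[Post], threshold: float = 0.2) -> set[str]:
    """Flag the links, not the posts, whose comment threads push back hard enough.

    Because the flag attaches to the URL itself, every other post sharing
    the same link would be down-ranked too.
    """
    return {
        post.shared_url
        for post in posts
        if post.shared_url and debunk_ratio(post.comments) >= threshold
    }
```

In this toy version, a link flagged once is flagged everywhere, which mirrors the spokesperson’s point that anyone else sharing the same URL would see their post suppressed as well.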

And there might be more to come in the fact-checking field. Adam Mosseri, Facebook’s vice president in charge of the news feed, shared a statement with TechCrunch that hinted at future plans:

Despite these efforts we understand there’s so much more we need to do, and that is why it’s important that we keep improving our ability to detect misinformation. We’re committed to continuing to work on this issue and improve the experiences on our platform.

What Facebook chooses to do will ultimately be informed by how the company sees its role on the internet. If it considers itself a mirror that reflects the rest of the net, unfiltered and unvarnished, then it probably won’t step in to play a stronger moderating role. But if it fancies itself a safe space for sharing opinions and ideas, in addition to the humdrum of daily life, it might need to be more of an arbiter.

The company’s recently taken strides toward the “safe space” model, building safety tools like a hub for cyberbullying prevention resources and a system that makes it easy for Facebook users to report friends’ posts that seem to indicate thoughts of self-harm or suicide. (At an event in New York City Thursday night, Zuckerberg said, “When we started, the north star for us was: We’re building a safe community,” reported Will Oremus, a technology writer at Slate.)

It would still be quite a leap to move from a focus on emotional and physical safety to refereeing the facts in a debate—and, on top of that, it would be really hard to do. The sheer volume of links posted to Facebook every day would overwhelm even a large team of human fact-checkers. Yet the current, algorithmic system doesn’t seem to be doing its job either, judging by the volume of posts that have slipped through the cracks.

Perhaps the algorithms that currently find disputed posts could flag them for human review by a Snopes-like fact-checking team—heck, Facebook could even buy Snopes. Once the team had researched the facts in a story and determined its accuracy, it could overlay a badge like the ones that already populate the Snopes site—an X in a red circle for “false”; a check mark in a green circle for “true”—on one corner of the link preview in the post.
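As a back-of-the-envelope illustration of that workflow, the sketch below routes algorithmically flagged links to human reviewers and records the Snopes-style badge each verdict would earn. Everything in it, from the Verdict labels to the research callback, is hypothetical; no real Facebook or Snopes API is implied.

```python
# Toy illustration of the proposed review-and-badge loop; nothing here
# corresponds to a real Facebook or Snopes interface.
from enum import Enum
from typing import Callable, Iterable

class Verdict(Enum):
    TRUE = "true"     # would render as a check mark in a green circle
    FALSE = "false"   # would render as an X in a red circle

BADGES = {Verdict.TRUE: "✓", Verdict.FALSE: "✗"}

def review_and_badge(
    disputed_urls: Iterable[str],
    research: Callable[[str], Verdict],   # stand-in for the human fact-checkers
) -> dict[str, str]:
    """Map each flagged link to the badge that would overlay its preview."""
    return {url: BADGES[research(url)] for url in disputed_urls}
```

The key design choice is the same as before: the verdict attaches to the link, so a single human review would cover every post that shares it.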

A perfect fact-checking system probably wouldn’t have changed the outcome of this week’s election; Trump’s surprise win wasn’t Facebook’s doing alone, of course. Zuckerberg said Thursday that fake news “surely had no impact” on the election, Oremus reported.

But one of the central criticisms of Trump’s campaign was his lack of interest in facts, and the willingness of his supporters to join him in ignoring them. Despite the wealth of fact-checking resources available online, Facebook made it easy to see the same mistruths that were shouted from the podium emblazoned in headlines and shared over and over by friends and family.

If Facebook’s users don’t call out mistruths online, and Facebook itself isn’t willing to, who will?