This month, in my Scientific American column, I wrote about the rise of fake news: bogus stories, posted on no-name Web sites, intended to generate ad income (and perhaps to influence the presidential election).
At first, Facebook CEO Mark Zuckerberg downplayed the significance of the fake-news tsunami.
But as the backlash continued, Facebook soon rolled out new tools to combat the spread of fake news. Notably, the company declined to judge stories itself; as a spokesman put it, “We cannot become arbiters of truth ourselves.”
Instead, the new tools take four approaches, which Facebook intends to study and enhance over time.
First: If you tap the V button at the top right of a post and then choose “Report this post,” you’ll see a new option called “It’s a fake news story.” On the next screen, you’ll have a choice of options, including “Mark this post as fake news.” (Other options include “Message Chris Robin” [or whoever posted the story] to let them know they fell for it.)
Second: If enough people flag a story as fake, it will be sent to a fact-checking organization like Snopes.com or PolitiFact. And if the outfit determines that yes, the story is bogus, it will appear on Facebook with a red banner that says, “Disputed by Third-Party Fact Checkers.” That banner will include a link to the fact checkers’ article explaining why the story is false. (If you try to share it, you’ll see a similar message.)
In other words, Facebook is being careful to avoid outright censorship. The stories still appear, but with flags that identify them as phony. (Those stories may appear lower in your News Feed, and can’t be promoted or made into ads.)
Third: Facebook will employ software to help identify fake stories. For example, Facebook has learned that when lots of people read a certain article but then don’t share it, it’s often because the story is phony.
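That read-but-don’t-share signal can be sketched as a simple heuristic. Facebook hasn’t published its actual detection software, so every name and threshold below is invented for illustration; the real system is surely far more sophisticated.

```python
# Hypothetical sketch of the "lots of reads, few shares" signal described
# above. Field names and thresholds are invented; Facebook's real model
# is not public.

def looks_fake(reads: int, shares: int,
               min_reads: int = 1000,
               max_share_rate: float = 0.02) -> bool:
    """Flag an article that many people open but few pass along."""
    if reads < min_reads:           # too little data to judge
        return False
    share_rate = shares / reads     # fraction of readers who shared
    return share_rate < max_share_rate

# A widely read story that almost nobody shares gets flagged for review:
print(looks_fake(reads=50_000, shares=200))    # 0.4% share rate -> True
print(looks_fake(reads=50_000, shares=5_000))  # 10% share rate -> False
```

A flag like this wouldn’t delete anything on its own; it would simply queue a story for the kind of human fact-checking described in the second step.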
Fourth: Facebook is trying to shut down the financial incentive for fakers. Its engineers have eliminated fakers’ ability to create Web sites that impersonate actual news sites, for example. And the company will analyze sites that draw ad dollars from Facebook traffic, cutting them off if they’re in the business of fake-news fraud.
As Facebook has pointed out, there can be a fine line between “fake news” and satire, rumor, editorial and true-but-missing-context stories. The company’s four steps strike me as impressively fair and thoughtful. They aim to help readers sort out fact from fiction, without actually stifling anybody’s opinion.