Voters in the UK woke up on May 8 to a new series of print ads and television announcements warning about fake news ahead of the country's June 8 election. The ads, which were announced on the BBC, ran in national newspapers and included a list of 10 tips for spotting fake news.
Facebook is taking additional action as well, including eliminating fake accounts on the network. According to statements provided to the press in the UK, Facebook has removed "tens of thousands" of fake accounts. The reasoning is that fake accounts can be used either to originate fake news or to echo it.
In a white paper published a few days ahead of that effort, the Facebook security team detailed the challenge it is confronting with what the company calls "information operations."
The Facebook security team describes such operations as “actions taken by organized actors (governments or non-state actors) to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.”
Facebook describes the role of those fake accounts: "These operations can use a combination of methods, such as false news, disinformation or networks of fake accounts aimed at manipulating public opinion (we refer to these as 'false amplifiers')."
Facebook is also quick to point out that information operations are not the same thing as fake news. Fake news, in the company's usage, is the broad category of false information circulating on the network. An information operation may use fake news as one of its tools, but it goes further, adding coordinated methods to spread and amplify that content.
For its part, Facebook is also tuning its algorithms to spot patterns the company believes indicate fake news. Those patterns include articles that are widely clicked but rarely reshared, which may indicate that readers consider them to be of low value. In addition, the company is watching for large numbers of identical articles posted by seemingly unrelated accounts.
The problem with such articles is that they are frequently tied to other accounts that automatically repeat postings, making an article appear popular when it is not. This is what Facebook calls false amplification.
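Facebook has not published its detection code, but the second pattern described above lends itself to a simple illustration. The sketch below is a hypothetical toy heuristic, not Facebook's method: it groups posts by normalized article text and flags any text that appears verbatim across many seemingly unrelated accounts, the signature of false amplification.

```python
from collections import defaultdict

def flag_false_amplification(posts, min_accounts=3):
    """Toy heuristic: flag article texts posted by many distinct accounts.

    posts: iterable of (account_id, article_text) pairs.
    Returns the set of normalized texts shared by at least
    `min_accounts` different accounts.
    """
    accounts_by_text = defaultdict(set)
    for account_id, text in posts:
        # Normalize case and whitespace so trivially tweaked copies still match.
        key = " ".join(text.lower().split())
        accounts_by_text[key].add(account_id)
    return {
        text
        for text, accounts in accounts_by_text.items()
        if len(accounts) >= min_accounts
    }

posts = [
    ("acct1", "Shocking claim about candidate X!"),
    ("acct2", "Shocking  claim about candidate x!"),
    ("acct3", "SHOCKING CLAIM ABOUT CANDIDATE X!"),
    ("acct4", "Local bakery wins award"),
]
print(flag_false_amplification(posts))
# The "shocking claim" story is flagged; the bakery story is not.
```

A production system would obviously need fuzzier matching (bot networks lightly reword posts) and signals about whether the accounts are genuinely unrelated, but the core idea, many independent-looking sources pushing identical content, is what the pattern describes.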
Facebook has come under increasing criticism as a conduit for fake news, and the company has been grappling with how to control it. For Facebook, getting a handle on fake news is critical, if only because the social network's credibility is at stake. Once the service becomes associated in the public mind with fake news, reestablishing that credibility will be difficult.
This is one reason why Facebook is spending the money to buy all of those ads in the UK alerting voters to the existence of fake news, and providing the means to tell what’s fake and what’s not.
But there's more at stake than just Facebook's reputation. The 2016 U.S. election was plagued by fake news, including stories falsely linking one candidate to human trafficking and claiming the Pope had endorsed another. While fake news does not appear to have affected the outcome of that election, it distracted from the real issues at hand, and in at least one case led a deranged gunman to attack a pizza parlor in Washington.
While stories about the Russian intelligence services using fake news and other means to interfere with the U.S. and French elections are dominating the headlines, it's not all about Russia. There are also purveyors of fake news, such as a group of teens in Macedonia, who create their stories for profit or simply for mischief.
Still, there's plenty of evidence that agents of the Russian government, as well as others unfriendly to the West, are deeply involved in creating fake and misleading information on social media, and that Facebook is their favorite vector for spreading it. The service's ubiquity, along with its relatively limited ability to police its content, makes it likely that attempts to spread such material will largely succeed.
The solution is going to be complex, and Facebook's sheer scale will make the problem hard to control. While improved algorithms will help, the one control Facebook used to have, human editors, has been abandoned by the company. At the time, the move was attributed to interference by those human editors in the priority ratings of stories, but that was a problem that better management could have handled.
This is an area Facebook should rethink. While better algorithms can do much of the heavy lifting, the company still needs a way to decide definitively what's fake news, what's an information operation, and what's neither. Automating that process may eventually be possible, but Facebook needs a way to handle the problem now.
Returning to human editors to make those critical decisions may be Facebook's best bet for fighting both fake news and abuse of Facebook as a messaging and information platform.
CEO and co-founder Mark Zuckerberg said last week that the company intends to hire up to 3,000 more employees this year to monitor the network’s content and identify and intercept fake news before it becomes widespread.