Of course Google isn’t going to take your word for it. What happens when you provide such feedback is that its engineers keep track of responses and use the preponderance of the input to schedule a review. In a few cases where something is a clear and present danger, Google may accelerate the review.
In his blog entry, Gomes explains how Google tries to limit sexually explicit or abusive autocomplete entries, hateful predictions and dangerous suggestions. Also useful, Gomes includes a link to the “How Search Works” article.
However, what’s most useful about Google’s new search policy is that the company is employing humans to look at problematic results and judge whether they should be included in Google’s results or not. Presumably, if they find content that is full of hate, lies or blatant incitement to violence, they will exclude it from search results.
The obvious question is whether Google will effectively kill fake news this way. As Gomes points out in his blog entry, people are always trying to game the system and it’s likely that fake news sites and the hate mongers will use Google's new ranking system to find better ways of getting their search results placed higher.
Even at this early stage it’s easy to try out Google’s new effort in a small way. Do a search for “Why is the sky blue?” and just below the little graphic you’ll see a tiny link that says “feedback.” Click that and you’ll get to provide your input into whether the result was useful.
Unfortunately, the claim that the sky is blue is hardly controversial and unlikely to attract hate speech. Searches for other topics don’t necessarily provide the same opportunity for feedback.
For example, search for “Did the Holocaust really happen,” and alongside Wikipedia entries on Holocaust deniers you get a prominent result that takes you to an organization called Stormfront, which is related to the Ku Klux Klan and claims the Holocaust didn’t happen.
While the Stormfront site isn’t exactly fake news (fake history is more accurate), it’s filled with misleading information, phony references and unsubstantiated claims. However, there currently is no opportunity in the search results to tell Google that this is fake news or that it’s misleading or anything else.
The challenge for Google is both scale and focus. Google receives what Gomes says are billions of requests daily, and while less than one percent are questionable, that’s still a lot of searches. Will Google be able to find its way through the sheer mass of potential problems? And how will Google decide which questionable items are serious enough to require some action?
On one hand, Google is to be admired for trying to fight fake news, but on the other hand one wonders whether it will be able to keep up the fight, given the sheer volume of content and web publishers' ability to find new ways to game the system.