YouTube CEO Susan Wojcicki caught the tech community off guard Tuesday when she announced that the video-sharing service owned by Google parent Alphabet would begin posting Wikipedia entries next to conspiracy theory videos in an effort to give viewers a heads-up that the claims might be bogus.
About the same time, Google, which is also part of Alphabet, announced that it would no longer run web ads it considers deceptive, a category that includes what the company calls emerging threats such as cryptocurrency and initial coin offering (ICO) advertising.
Wojcicki made her announcement during a panel discussion at the South by Southwest conference in Austin, Texas. She explained that YouTube would begin running those Wikipedia entries next to conspiracy theory videos as a way to combat what it considers misleading information.
YouTube drew heavy criticism following the shooting in Parkland, Florida, when a conspiracy theory video claimed that the students being interviewed after the shooting were “crisis actors.”
That video reached the top of YouTube’s trending list, and subsequent research showed that fake videos spread faster on YouTube than factual material. This creates a problem for YouTube, if only because its managers want the service to be seen as a source of accurate, verified information.
The motivation for Google’s new ad policies was similar. Scott Spencer, Google’s director of sustainable ads, explained the new policy in a March 14 blog post. “This year, we updated several policies to address ads in unregulated or speculative financial products like binary options, crypto-currency, foreign exchange markets and contracts for difference (or CFDs),” Spencer said in his announcement. “We also updated our gambling ads policies to address new methods of gambling with items that have real-world value (e.g., skins gambling).”
While it’s admirable that Google and YouTube are making an effort to ensure that the information they provide is true, the move has caused consternation in some quarters. The Wikimedia Foundation, publisher of the Wikipedia online encyclopedia, clearly wasn’t amused to find itself drafted as the guardian of truth for YouTube, and said so in a series of tweets.
Apparently, Wojcicki failed to let the Wikimedia Foundation know about YouTube’s plans to use Wikipedia articles as a sort of arbiter of truth. While the announcement may have been a compliment of sorts to Wikipedia, fact-checking is not the site’s intended purpose. As the foundation noted in its tweets, Wikipedia is an online, user-edited encyclopedia; it is not a news website, nor is it meant to serve as a fact-checking service for breaking news.
Perhaps worse, because anyone can edit Wikipedia, such a use could prompt the people who produce and post conspiracy theory videos to edit the corresponding Wikipedia entries to match their videos. That would erode Wikipedia’s accuracy in some cases and lead people to question its veracity.
Despite the faux pas of recruiting an unwitting web service into helping YouTube get a handle on fake news, the fact is that YouTube and Google are at least trying to do something to solve the problem. Other services, including Facebook and Twitter, are still trying to come to grips with fake news.
YouTube is particularly susceptible to such misleading content because it’s primarily a video service. There, unconstrained by the need to tell the truth, bad actors are free to create sensational videos, make up stories, and even create fake newscasts in a cynical effort to get their points of view across. What they create may look real, but it’s not.
Another factor at work as YouTube tries to address this issue of phony and staged propaganda videos is that reality is rarely as much fun to watch as fiction. This means that faked videos will always shove the truth aside in favor of lurid stories that attract viewers who want to be fooled, or who want to cluck their tongues over obviously false material.
It’s important to note that fake news in this context refers to fictional stories made to look real by manipulating the images that appear in them. This is different from using the term “fake news” to describe stories you simply don’t like.
The misleading material that Google is trying to eradicate is probably a problem the company won’t be able to solve, at least not completely. Google doesn’t have, and can’t hire, enough people to screen everything for accuracy, so YouTube has to depend on its users flagging suspicious material.
Unfortunately, not everyone knows a lie when they see it, and if it looks convincing, a lot of people will believe it. During the 2016 U.S. presidential election, creators of fake news were able to produce stories that looked convincing, distracting the candidates from their core messaging while also convincing voters that a wide variety of wild stories about the candidates were true.
Now the question has to be whether Google, YouTube, Facebook and the other social media services are doing enough, and doing it soon enough. The midterm congressional elections are already at hand, and the purveyors of fake news are ramping up to spread more of it. It remains to be seen whether YouTube’s truth-telling will be effective, or whether the attractive fantasies of the fake news creators will sway more viewers.
Unfortunately, I’m not convinced that running a Wikipedia article next to a video will help. It requires, first, that users actually read the article, which may not happen for people who came to watch a video, and second, that they believe Wikipedia’s presentation of the facts.
I think Wojcicki should be commended for trying to solve a serious problem. But her solution requires actual thought, and I don’t know how much of that has actually been done.