Google this week said it has initiated a thorough review of its ad policies amid a firestorm of criticism in the United Kingdom for allowing ads from the government and major companies to appear next to inappropriate content on YouTube and other Google properties.
In a blog post March 17, Ronan Harris, managing director of Google U.K., acknowledged the outrage directed at Google and vowed the company will make changes in the coming weeks. Organizations that advertise with Google will have more direct control over where their ads appear on YouTube and the Google Display Network, Harris said.
“We’ve heard from our advertisers and agencies loud and clear that we can provide simpler, more robust ways to stop their ads from showing against controversial content,” he said.
The controversy erupted after a recent investigative report by The Times revealed that advertisements from the UK government and dozens of major brands including Marie Curie, Mercedes-Benz, the BBC and Transport for London were being placed next to YouTube videos from jihadist, neo-Nazi, KKK and other hate groups.
The ads, which are placed automatically through a process known as programmatic advertising, generate more than $7.50 for every 1,000 views for the people who post the videos and are inadvertently funding terrorist and other hate groups, The Times said.
The report has provoked considerable outrage among advertisers in the UK, some of whom have announced plans to pull their ads from Google’s networks until the company can guarantee better control over ad placement.
CNN reported on March 17 that the UK government has summoned Google executives to explain why taxpayer-funded advertisements were being displayed alongside inappropriate and controversial content. CNN quoted a spokesman as saying the British government has decided to temporarily restrict its ads on YouTube until it gets reassurances from Google that such misplacements won’t happen again.
The Guardian is also pulling all of its online advertising from Google after its ads appeared on YouTube videos promoting extremist causes.
In a report on its decision this week, the Guardian’s chief executive, David Pemsel, described Google’s ad misplacement as “unacceptable.”
The Guardian will withdraw its ads until Google can guarantee that such misplacement will not occur again, the report said. Pemsel noted the dominance of Google and its YouTube and DoubleClick brands and said the company owed advertisers the highest standards to avoid ad fraud and misplacement. He urged other brands to blacklist Google until the issue is resolved.
This is not the first time that Google has faced controversy over its programmatic advertising process. The company faces at least two lawsuits in the United States that accuse it—and others such as Twitter and Facebook—of providing material support to terrorists by allowing ads to run on jihadist videos. Families of three victims in the Orlando, Fla., nightclub shooting last year filed one of the lawsuits. A parent of a victim in the Paris terror attack last year filed the other.
Like other major internet properties, including Facebook and Twitter, Google uses an automated system to place advertisements on its properties and third-party sites.
As Harris noted in Google’s mea culpa on March 17, Google has millions of sites in its network, and some 400 hours of video are uploaded to YouTube every minute. As a result, advertisers, and companies like Google that place ads programmatically, often do not know where the ads end up being displayed.
Google currently offers advertisers and ad agencies tools to control where their ads appear. For example, a “topic exclusions” option lets advertisers instruct Google not to display their ads on pages containing specific topics, and a “site category exclusions” option lets them do the same for entire categories of sites.
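Conceptually, these exclusion settings act as filters that are checked before an ad becomes eligible to serve against a given page or video. The Python sketch below illustrates that idea only; it is not Google’s implementation, and the Campaign and Placement structures, the is_eligible helper and the topic labels are all hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Campaign:
    """Simplified advertiser campaign with exclusion settings (hypothetical model)."""
    name: str
    excluded_topics: set = field(default_factory=set)
    excluded_site_categories: set = field(default_factory=set)


@dataclass
class Placement:
    """A candidate page or video where an ad could be served."""
    url: str
    topics: set = field(default_factory=set)
    site_category: str = "general"


def is_eligible(campaign: Campaign, placement: Placement) -> bool:
    """Return True only if the placement violates none of the campaign's exclusions."""
    if placement.site_category in campaign.excluded_site_categories:
        return False
    if campaign.excluded_topics & placement.topics:
        return False
    return True


# Example: an advertiser excluding sensitive topics and a whole site category.
campaign = Campaign(
    name="UK brand campaign",
    excluded_topics={"hate speech", "violent extremism"},
    excluded_site_categories={"parked domains"},
)

safe = Placement(url="https://example.com/cooking", topics={"food"})
flagged = Placement(url="https://example.com/video123", topics={"violent extremism"})

print(is_eligible(campaign, safe))     # True
print(is_eligible(campaign, flagged))  # False
```

In a real programmatic system, checks like these run inside the ad auction against topic and category labels produced by automated content classification, which is where misclassified videos can slip through.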
But given the sheer volume of videos and other content posted online, and the speed at which it arrives, mistakes do happen, Harris said. “We recognize that we don’t always get it right,” he said.
“In a very small percentage of cases, ads appear against content that violates our monetization policies. We promptly remove the ads in those instances, but we know we can and must do more.”