Google Finds "Limited" Misuse of Its Platforms During 2016 Elections

Google has launched new transparency and security initiatives to reduce risk of its online properties being used by foreign agencies to spread election misinformation.


Google on Monday said an internal investigation had found evidence of its online platforms being misused by actors linked to Russian propaganda firm Internet Research Agency during the 2016 U.S. presidential election.

But the company described the misuse as limited in nature and said it is working to stop such abuse altogether in the future.

"Preventing the misuse of our platforms is something that we take very seriously," said Kent Walker, Google's senior vice president and general counsel, and Richard Salgado, the company's director of law enforcement and information security, in a blog post Oct. 30. The two executives said Google is working closely with government, law enforcement and industry to find ways to stop the abuse.

Google, along with Facebook and Twitter, testified Tuesday at a Senate Judiciary Committee hearing on extremist content and Russian disinformation online. Lawmakers want to know the extent to which Russian actors abused these platforms to spread disinformation during the elections and what the companies have been doing to address such abuse. Lawmakers are also trying to assess the impact the misinformation campaigns may have had on the election outcome.

According to Walker and Salgado, Google's investigation found 18 YouTube channels that were likely associated with the Russian misinformation campaign. Altogether, the 18 channels uploaded a total of 1,108 videos with about 43 hours of content to YouTube.

The videos garnered about 309,000 views in the U.S. between June 2015 and Nov. 2016. About 30 of them garnered more than 5,000 views. According to Google, the videos did not appear to be targeted at the U.S. or to specific groups of people within the country.

In addition, two accounts linked to the Internet Research Agency spent $4,700 on search and display ads on Google's online platform during the election. Google claimed it found no evidence that the ads were targeted at any particular group of users or specific regions in the country.

The 2016 elections marked the first time that Google gave entities across the political spectrum an opportunity to target ads based on search inferences of whether an individual was "left-leaning" or "right-leaning." But according to the company, such targeting was limited to U.S. advertisers with whom Google already had an existing relationship.

All political advertisers were also required to comply with applicable local election and campaign laws during the 2016 election cycle, the company said. The policies also required advertisers to ensure their ads met Google's own policies against discrimination, harassment or exploitation of sensitive events, Google said.

Starting in 2018, Google will release data about who is purchasing election-related ads on its platforms and how much money is being spent on these ads. Google will also roll out next year a publicly accessible database of political ads on its AdWords platform and YouTube.

In addition, Google is working on tougher verification standards for people who want to place political ads on its platforms. Because U.S. law restricts foreign entities from running election ads, Google will conduct its own checks to verify that people who want to run election ads on its properties are legally permitted to do so.

Jaikumar Vijayan

Vijayan is an award-winning independent journalist and tech content creation specialist covering data security and privacy, business intelligence, big data and data analytics.