Google this week reported progress on its efforts to flag and remove violent extremist videos from YouTube even as it admitted to making mistakes along the way.
In an Oct. 17 update on its YouTube blog, Google said its automated machine-learning tools are getting better at removing videos that violate its content policies. The company has also added more human reviewers for potentially offensive videos and implemented tougher standards for videos that do not explicitly violate policy but are borderline extremist.
Google says that 83 percent of the videos removed from YouTube in the past month for depicting violent extremism were taken down automatically rather than after being flagged as offensive by viewers. That represented an 8 percent increase from August. The YouTube team has also reviewed over one million videos in a bid to improve its automatic detection and removal capability.
"Inevitably, both humans and machines make mistakes, and as we have increased the volume of videos for review by our teams, we have made some errors," members of the YouTube team said in the blog. "We know we can get better and we are committed to making sure our teams are taking action on the right content."
The status update comes as pressure builds on Google to do more to prevent its platforms from being used to spread all kinds of misinformation and extremist material. The pressure has put the company in a position where any decision it makes draws criticism from one quarter or another.
When the company has taken measures to block certain sites, videos and content, some have accused it of acting like a self-appointed censor. At the same time, others have accused it of not doing enough to prevent foreign agents, terrorists and hate groups from using its platform to spread divisive messages.
Just this week, for instance, a report in Salon questioned whether the company is choking off, and essentially censoring, progressive websites through a clumsily implemented effort to stamp out fake news sites.
The report pointed to sudden plunges in search traffic at several small liberal news sites as evidence that Google is deliberately tampering with its search algorithms in response to demands that it do more to surface content from authoritative sources.
Meanwhile, another report, in the New York Times, faulted Google for not doing enough to prevent fake news ads from being served on, of all places, fact-checking websites. The story blamed Google's automated ad placement system for the problem and warned about how the platform could be abused to spread disinformation and inflammatory messages.
Google is not the only one under the microscope. Facebook and Twitter face similar pressure and criticism.
Google's efforts on YouTube are a critical component of its strategy to prevent misuse of its platforms. In its update this week, Google noted that it has added 35 non-governmental organizations from 20 countries to what the company calls the Trusted Flagger program for YouTube.
The NGOs, according to Google, are experts in areas like hate speech, terrorism and radicalization, and are critical to helping the company develop policies for flagging and removing extremist content from YouTube. The newly added organizations include the International Center for the Study of Radicalization at King’s College London and The Wahid Institute in Indonesia, Google noted.