YouTube and Facebook have begun using automated processes to block and quickly remove terrorist propaganda videos, Reuters reported June 25, citing two people familiar with the situation.
Until now, human editors have reviewed flagged content and removed it as they deemed necessary; the automated process can react far more quickly and effectively.
The companies aren’t commenting on the change, which the report said was encouraged by an April group phone call that included Monika Bickert, Facebook’s head of global policy management. She proposed a number of options for discussion, including one developed by the Counter Extremism Project, a nonprofit group that years ago developed software with the initial aim of identifying and removing online child pornography.
Hany Farid, a senior advisor for the Counter Extremism Project and a computer science professor at Dartmouth College, helped develop the software and discussed it on MSNBC’s Morning Joe June 23, explaining that it recognizes “hashes” that make up the so-called DNA of a video. While manually removing content has been likened to a game of “Whack-a-Mole”—it’s removed only to pop up again somewhere else—the hashes enable an algorithm to identify each individual video and block it from ever appearing again.
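Farid’s description maps onto a family of techniques known as perceptual, or robust, hashing. The Counter Extremism Project has not published its algorithm, so the sketch below is illustrative only: the simple “average hash,” the 8x8 grayscale frame input, and the max_distance matching threshold are all assumptions chosen to show the idea, not the project’s actual method.

```python
# Illustrative sketch of perceptual ("robust") hashing, the general family of
# techniques Farid describes. The Counter Extremism Project's real algorithm
# is not public; this average hash and its parameters are assumptions.

def average_hash(pixels):
    """Fingerprint an 8x8 grayscale frame (64 values, 0-255) as a 64-bit
    integer: each bit records whether a pixel is brighter than the mean."""
    assert len(pixels) == 64
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

def is_blocked(frame_hash, blocklist, max_distance=5):
    """Match an uploaded frame against fingerprints of known banned content.
    Tolerating a few differing bits means a re-encoded or resized copy of a
    known video still matches, which is the point of the 'DNA' analogy."""
    return any(hamming_distance(frame_hash, h) <= max_distance
               for h in blocklist)

# Hypothetical usage: fingerprints of previously removed frames are shared
# across platforms; a slightly re-compressed re-upload still matches.
known = {average_hash([30] * 32 + [220] * 32)}
reupload = [28] * 32 + [225] * 32
print(is_blocked(average_hash(reupload), known))  # True: blocked at upload
```

A production system would fingerprint many frames per video and use a far more robust hash, but the outline is the same: compare each upload against a shared database of known fingerprints and block on a close match, rather than waiting for a human reviewer.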
“It works fully automatically, it’s efficient, it’s accurate and it allows technology companies to effectively enforce what is already in place, which is their terms of service, that this content is not allowed on their network,” said Farid. “We are simply creating for them a mechanism to enforce that in an automatic, efficient and accurate way.”
Farid said he’d welcome the government’s intervention as an “honest broker,” helping to bring together more parties—such as academics and nongovernmental organizations—to create a greater sense of urgency around the problem.
“But at the end of the day,” added Farid, “the technology companies have to decide we no longer want this content on our network. It is harmful for the network, it is harmful for society, and it is harmful for the world.”
Pushback against the technology comes from those who fear a slippery slope into further censorship—the exact concern that was voiced when the technology was first proposed as a solution to identify and remove child pornography, Farid said.
Facebook, which recently defended itself against accusations that it promotes politically biased content, was initially wary of the automated process, according to reports.
“We are talking about taking off videos that show violent beheadings. That show calls to violence. … The worst of the worst. We absolutely should have dissent on the Internet. We should absolutely have discussion on the Internet. But there is no place, in my mind, of pictures or videos of young children being sexually assaulted or of people having their heads chopped off. I think that’s where we reasonably can draw the line,” Farid insisted.
Twitter did not respond to a request for comment, but according to Reuters, it is still evaluating the Counter Extremism Project’s software and “has not yet taken a position.”
In May, Twitter, YouTube, Facebook and Microsoft were among a group of technology companies that signed a Code of Conduct released by the European Commission and EU member states in an effort to curb online hate speech and violent extremism.
“Social media is unfortunately one of the tools that terrorist groups use to radicalize young people and racists use to spread violence and hatred,” Věra Jourová, EU commissioner for Justice, Consumers and Gender Equality, said in a May 31 statement. “This agreement is an important step forward to ensure that the Internet remains a place of free and democratic expression, where European values and laws are respected.”