Facebook’s late and ineffective response to Cambridge Analytica’s breach of 50 million user profiles, harvested in 2014, has not only enraged many of its billions of users around the world; it has them wondering whether they can trust the company to protect their data from commercial and political exploitation.
That outrage had many users vowing to shut down or at least abandon their Facebook accounts, while a selloff in Facebook stock wiped as much as $35 billion from the company’s market capitalization by March 20.
But it took five days after the user profile breach was widely publicized on March 16 for Facebook founder and CEO Mark Zuckerberg to come forward with an official statement taking responsibility for the breach and promising fixes.
The data loss occurred when a researcher working with data mining firm Cambridge Analytica offered to pay some Facebook users to share profile data for a research project. While those users were told that their personal profiles would be used, what actually happened is that the researchers also obtained the complete profiles of their friends.
Facebook apparently found out about the misuse of this data in 2015 and asked Cambridge Analytica to erase any data it had gathered improperly, but according to an investigation by the New York Times, that never happened.
Zuckerberg and his company’s management had an obligation to verify that those user profiles were actually deleted when Facebook demanded it. But they didn’t. When Facebook asked Cambridge Analytica in 2015 to erase the information, the company agreed but didn’t actually delete anything. The leaders of Cambridge Analytica lied, and Facebook didn’t check.
Now, years later, Facebook has once again failed to take quick, decisive action when faced with that reality.
“I’ve been working to understand exactly what happened and how to make sure this doesn’t happen again,” Zuckerberg said in a statement released on Facebook. “The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there’s more to do, and we need to step up and do it.”
Zuckerberg then laid out a timeline of how the breach happened, and how a Russian researcher, Aleksandr Kogan, had claimed he was doing a research project at Cambridge University, but in reality was sharing his data with Cambridge Analytica. Facebook trusted Kogan with its data, but failed to verify that it was being used according to its requirements.
“This was a breach of trust between Kogan, Cambridge Analytica and Facebook,” Zuckerberg said.
Zuckerberg then described his plans for making it harder for Facebook apps to access user information, starting with an audit of the various apps that do this, including things like Facebook games, personality tests, history quizzes and the like. He said that sometime in the coming months, Facebook will make it possible for users to see what apps they’re signed up for and make it easy to revoke permission for those apps to collect data.
Shortly after Zuckerberg published his statement of contrition, Facebook released a detailed description of what it plans to do. Describing the limits that Facebook places on apps’ ability to access personal information as “guardrails,” the announcement claims that such an abuse could no longer happen. But then the statement admits, “Even with these changes, we’ve seen abuse of our platform and the misuse of people’s data, and we know we need to do more.”
Facebook listed six steps that it plans to put into place, starting with the audit Zuckerberg mentioned in his statement. Facebook also promised to be more transparent by telling its users about data misuse.
Then, addressing one of Facebook’s bigger security holes, it promises to turn off access for unused apps. Such a hole opens when someone uses an app once, perhaps without even realizing it’s an app, and the app then stays around collecting profile data indefinitely. Now, Facebook says it will turn off access if an app goes unused for three months.
The company is also promising to restrict what data is available when someone signs into a third-party service with their Facebook login information. Finally, Facebook is going to amp up its bug bounty program.
The Facebook announcement asserts that the company was already planning to implement some of those changes before the European Union begins enforcing its General Data Protection Regulation on May 25. Unlike the U.S., the EU has rules with actual teeth, which means that Facebook has no choice but to pay attention or face huge fines. But the truth is that Facebook will have to deal with investigations and enforcement actions from both EU regulators and the U.S. Federal Trade Commission.
Zuckerberg explained his rationale for making the changes now. “I started Facebook, and at the end of the day I’m responsible for what happens on our platform. I’m serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn’t change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward.”
What Zuckerberg doesn’t explain is why it took the bombshell exposé by ITN and the New York Times in 2018 to spur action. While Facebook made some changes in 2015, when it found out what Cambridge Analytica had done, it never verified the destruction of the data, and it never made the kind of changes it’s making now. Many of those changes could have come much sooner had the company decided that being proactive was better than being reactive.
Facebook also doesn’t explain how it’s going to become a proactive organization. Apparently the company is willing to wait until the next breach to fix whatever else turns up, rather than working to determine what weaknesses exist and fixing them before they become breaches.
It’s the lack of urgency, the inability to be proactive, and the apparent unwillingness to do more than the minimum that’s fueling the #deletefacebook movement emerging across the internet, including on Facebook itself. This kind of erosion of its user base, if it takes a strong hold, could eventually kill Facebook if advertisers decide that user mistrust has become pervasive.
Facebook can prevent this if it proves that it is sincere and proactive about protecting user data. But right now it looks like Facebook is more committed to protecting its business interests.
To restore some semblance of user confidence it needs more than guardrails and guidelines. It needs walls.