As cyber-threats continue to grow, so too do the costs and risks associated with cyber-insurance. In a 56-page report, insurance provider Lloyd’s of London and cyber-risk analytics firm Cyence provide metrics and insight into how to measure and evaluate modern cyber-risks. Among the top-line findings in the report is that in a worst-case scenario for a cloud service provider, an outage caused by a cyber-attack could result in $53 billion in losses.
“Generally, an extreme event would be multiple days of complete downtime, something that has not yet happened but is possible in the future,” George Ng, CTO and co-founder of Cyence, told eWEEK.
The joint Cyence/Lloyd’s report also details a hypothetical scenario in which a zero-day vulnerability affecting a leading operating system is leaked to, or discovered by, hackers as a result of human error. The hypothetical zero-day is then used by attackers for financial gain and broad attacks. In that scenario, Cyence estimates that the worst-case impact for insurers could be as high as $28.7 billion.
The recent WannaCry ransomware worm attacks that first started in May are not an example of such a zero-day leak due to human error, according to Ng. The WannaCry attack made use of a vulnerability that was publicly disclosed by a hacker group known as the Shadow Brokers and was patched by Microsoft in March. The Shadow Brokers had allegedly stolen the zero-day used in WannaCry from the Equation Group, which has suspected ties to the U.S. National Security Agency (NSA).
Ng noted that the Shadow Brokers release illustrates the damage that can follow even a responsibly handled disclosure, since Microsoft patched the vulnerability before the leak became public. He added that mass vulnerabilities carry a long tail of exposure risk.
“If the exploit were used stealthily before the patch was issued, advanced attackers could propagate laterally within corporate networks, add backdoors and become a persistent presence behind the firewall,” Ng said. “This means that mass exploits can contribute to increased breach levels even months or years after the patches are implemented, and well after the pile of attacks that exploit the vulnerability in the aftermath of its disclosure.”
While the report specifically calls out worst-case scenarios and attempts to quantify their risk, everyday non-catastrophic risks are easier to estimate, according to Ng. Because such risks are common and have occurred before, he noted, they can be measured directly.
“If you’re purchasing insurance or you’re an insurance carrier, you tend to be most concerned with large-scale issues that could cause liquidity or capital problems, such as these catastrophic events,” Ng said. “If you’re more focused on day-to-day operations or underwriting a single risk for a short duration, then common problems could be even more important because they are more plausible and occur more frequently.”
Ng added that cyber-insurance has gone from a niche risk to one that carriers are beginning to view as ubiquitous and a real opportunity for growth. As the market evolves, Ng noted, cyber-insurers will need to change somewhat as well. He suggested that better economic modeling of cyber exposures will enable more capacity for large exposures.
“Sophisticated analytics and economic exposure modeling will help insurers innovate in a data-driven manner that helps them build sustainable products that meet their customers’ needs,” he said.
Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.