Numbers never lie—except when they are used to draw false conclusions. And if those false conclusions are part of an IT security strategy, then nothing good can happen.
Just ask Brian Martin and Steve Christey, members of the CVE (Common Vulnerabilities and Exposures) Editorial Board, who at the upcoming Black Hat USA conference (July 27 – Aug. 1 in Las Vegas) will outline the ways they have seen vulnerability statistics misused over the years.
"Vulnerability stats are misused in many different ways," said Steve Christey, principal information security engineer in the security and information operations division at MITRE. "The most common error is to calculate and present the statistics without accounting for the different kinds of bias that exists in the original data. Many people who generate statistics are using somebody else's data, e.g., a vulnerability database that they do not operate themselves.
"There seems to be a common misconception that vulnerabilities are a naturally occurring phenomenon that can be easily and reliably monitored, like the weather or the study of disease within a population," he added. "Our industry is nowhere near that level of maturity."
In CVE's 14-year history, Christey said, he has been asked only about five times how CVE collects and represents vulnerability data. Common assumptions include that a single CVE entry covers only one vulnerability, and that CVE has knowledge of all published vulnerabilities. In reality, however, a single CVE entry may cover multiple vulnerabilities.
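Christey's point about entry counts can be illustrated with a minimal Python sketch. The records below are entirely hypothetical (the field names and figures are illustrative, not CVE's actual schema); the sketch only shows why counting CVE identifiers is not the same as counting vulnerabilities once one entry can bundle several flaws:

```python
# Hypothetical CVE-like records; one entry may bundle several distinct flaws.
# Field names, IDs, and counts are illustrative only, not CVE's actual schema.
entries = [
    {"id": "CVE-XXXX-0001", "product": "ExampleApp", "flaws": 1},
    {"id": "CVE-XXXX-0002", "product": "ExampleApp", "flaws": 3},  # bundled advisory
    {"id": "CVE-XXXX-0003", "product": "ExampleApp", "flaws": 1},
]

# Naive metric: treat one entry as one vulnerability.
entry_count = len(entries)

# Closer to reality: sum the distinct flaws each entry covers.
flaw_count = sum(e["flaws"] for e in entries)

print(entry_count, flaw_count)  # 3 vs. 5 -- the two "vulnerability counts" disagree
```

Any statistic built on the naive count inherits that gap, which is one of the biases Christey warns consumers of vulnerability data to account for.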
"Anybody who maintains a large vulnerability repository struggles constantly with maintaining consistency and quality, while simultaneously adjusting to the rapid change and growth in vulnerability research," he told eWEEK. "This can force some difficult or unexpected decisions that are not necessarily obvious to consumers, who may be using the data under faulty and dangerous assumptions."
Companies and individuals that analyze vulnerability databases tend to blindly accept the information inside as perfect and complete, added Martin, who is content manager of the Open Source Vulnerability Database.
"If the data they are working against shows only three vulnerabilities in a given product, the company may mistakenly assume it is a relatively secure product," said Martin. "In reality, all of the large vulnerability databases may have missed published vulnerabilities in the product, typically because they use a single channel to do so (e.g., their Web site). We routinely see this while digging up more vulnerabilities to add to our databases."
Some of the most secure products actually have a large number of published vulnerabilities, Christey said, because they are popular and under investigation by expert researchers. Most products don't get that type of special attention.
"The inherent insecurity of a product is better determined by the difficulty of finding a new vulnerability, combined with the number of skilled people who are looking at the product and the amount of human labor required to find the vulnerability in the first place," Christey said. "Too many products can be hacked with only 10 minutes' effort using simple techniques for the most obvious vulnerability types; that's the low-hanging fruit of vulnerability research, and we will show its impact on vulnerability statistics at Black Hat."
The message, the researchers said: treat vulnerability counts, and claims that one Web browser or operating system is more secure than another, with a healthy dose of skepticism.
"At Black Hat, we will go into details about why vulnerability counts have major systematic problems and should not be relied on without digging more deeply into the context," Christey told eWEEK. "Vulnerability counts are some of the easiest and most obvious statistics to generate, but they are fraught with peril, especially when used to compare products or vendors. Any study that uses vulnerability counts without extensive disclaimers or context should be regarded with suspicion."