Should We Be Legally Obligated to Fix Vulnerabilities?

Opinion: Forensics make the cause of data breaches clear after the fact, but breaches are seldom easy to predict.

Security people deal with this scenario all the time: An organization's internal IT staff finds a vulnerability, or a third-party security assessment firm finds one, but there's no leverage to get upper management to approve a fix.

The lack of any legal obligation to fix known vulnerabilities is enough to get your blood boiling, particularly if you've read a recent discussion of this topic on the blog of WhiteHat Security's Jeremiah Grossman.

No one wants to see an organization compromise its data when it could have tightened its security before a breach occurred.

But hindsight is 20/20. Forensics may paint a clear picture of what went wrong leading up to a breach, but it's seldom as simple to predict what's going to get you in trouble before it happens.

Ted Julian, vice president of marketing and strategy for Application Security, had a useful take on this when I talked to him recently.

First off, you could spend your whole day addressing potential Web application vulnerabilities, doing code reviews and pen-testing, but it still wouldn't help you identify an insider threat from a database administrator with evil on his mind.

Julian has had plenty of experience with the messy realities of identifying vulnerabilities, particularly when he was at @Stake, a security company snapped up by Symantec back in 2004. Here's how he describes the reality of vulnerabilities in context:


"The presence of a vulnerability may or may not mean anything. That's especially true for Web vulnerabilities, because Web applications are incredibly complex. A vulnerability at one level in an application may mean nothing. The actual risk of data loss from that vulnerability is nil, based on how that application has been constructed.

"For example, if what you're trying to protect against is customer data being stolen from a database, you could try to achieve that by doing a code review of all your Web apps. You could try to achieve it by deploying a lot of firewalls and IDSs [intrusion detection systems]. You could try to take down all your wireless networks. You could try to achieve that by inspecting security at the other end of VPN connections, if you're worried about people coming into a partner network, a very legitimate concern."

It could be that none of those approaches solves the problem. What if the problem is a DBA who has more than enough access to rob you blind?

By itself, a given vulnerability doesn't mean anything. It has to be evaluated in the context of the broader IT environment, and of the broader business; only then can you make an informed decision as to whether it is a critical, code-red vulnerability. Without that context, nobody can tell what the impact of a vulnerability is. It could be that the Web application doesn't touch the database, or it could be that even if a program is subject to SQL injection, the flaw might not create a risk for the database.
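To make the SQL injection point concrete, here is a minimal, hypothetical sketch (the table, column names and input are invented for illustration) showing why the same user input is dangerous when concatenated into a query string but harmless when the query is parameterized:

```python
import sqlite3

# Hypothetical in-memory database, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: user input is concatenated directly into the SQL string,
# so the OR clause becomes part of the query and matches every row.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safer: a parameterized query treats the input as data, not SQL,
# so the literal name "nobody' OR '1'='1" matches nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()

print(unsafe)  # injection succeeds and leaks the row
print(safe)    # empty: the input was handled as plain data
```

Whether such a flaw actually matters, per Julian's point, still depends on context: if the vulnerable query never touches sensitive data, the real-world risk may be nil.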

Web applications are particularly problematic given their complexity, Julian notes.

That's not to say people shouldn't look at these situations very, very carefully. But the presence of a high-level SQL injection vulnerability, just to pick one class of vulnerabilities, may mean absolutely nothing depending on the class of application that forms its context.

"There's only so much a product can do. At some point it's going to be the judgment of the internal IT person or consultant to make that determination," Julian says.

So yes, while it does seem morally outrageous that there's no legal leverage to bring against companies that aren't improving a poor security posture, you have to pause for a moment and consider the messiness of real-life technology deployments. Of course, Grossman and other security assessors are more than qualified to determine which vulnerabilities could and should be ameliorated, and hence their moral outrage carries additional weight.

But given the computer illiteracy of legislators and the legal establishment as a whole, would you really want them to cook up laws dictating what you do with the complex soup that forms your infrastructure?

Case in point: Jammie Thomas, found guilty of illegal file sharing and ordered to pay $220,000 after five minutes of jury deliberation. According to news reports, jurors called her a liar, saying they didn't believe Thomas' story that someone spoofed her IP address. Yet one of those jurors, one who flat-out called Thomas a liar, admitted that he had never been on the Internet.

I'd say choose to convince the devil you know (your upper management) that you need to fix a given vulnerability, rather than look to gain legal leverage from the ignorant devils you don't.

Contact me at

eWEEK Senior Security Editor Lisa Vaas has written about technology since 1997.

Check out eWEEK.com's Security Center for the latest security news, reviews and analysis. And for insights on security coverage around the Web, take a look at eWEEK's Security Watch blog.