It all began in late 2001 with a plan from Microsoft and a few other companies to limit the details given in vulnerability disclosures, especially exploit information. Normal people, as opposed to those in the security business, are usually appalled to hear that many researchers publish details of how to exploit new vulnerabilities, including handy code for implementing the exploit.
This initiative took a step forward recently when a descendant group, the Organization for Internet Safety, announced a proposed “standard” for bug-disclosure procedures. The OIS comprises 11 companies, specifically “@stake, BindView, Caldera International (The SCO Group), Foundstone, Guardent, ISS, Microsoft, NAI, Oracle, SGI, and Symantec,” and was formed “to make it easier for security researchers and vendors to work together to fix security vulnerabilities.”
There's a lot to be said for releasing details of vulnerabilities publicly before patches are available, basically to allow individuals to determine whether they are affected. Very often, users can mitigate a vulnerability even without the patch, although usually at the cost of some functionality. Consider the vulnerability behind the SQL Slammer worm, and think back to the window between its discovery and the release of the patch (which came about six months before an effective exploit appeared). If the nature of the vulnerability had been disclosed before there was a patch, administrators at least would have known that there was an open port they could close to block remote access to the vulnerable service.
I just don't get the value of releasing exploit code for unpatched vulnerabilities. In fact, I don't get the point of releasing it for patched vulnerabilities. The best argument you can make is that it helps bring pressure on those responsible to patch the system; by the same logic, it brings even more pressure on users of the system to patch theirs. I've heard the argument that it helps people protect themselves against exploitation, but this is at best a small consideration next to the problems it causes. In fact, the standard is clear (section 7.3.10) that advisories may include defensive information, but not information (section 7.3.11) that “could aid attackers in exploiting the vulnerability.” There is some tension between those two provisions, but the line they draw is a reasonable one: exploit code definitely makes it easier for attackers to exploit, and it's not necessary in order to test for vulnerability.
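That last point is worth making concrete. A defensive check need not carry any attack payload at all. As a minimal, hypothetical sketch (the function name and the framing are my illustration, not anything taken from the OIS document, though the single-byte instance-enumeration probe itself is a well-known technique), here is how an administrator could test whether the service Slammer attacked, the SQL Server Resolution Service on UDP port 1434, is exposed on a host:

    import socket
    import sys

    def resolution_service_exposed(host: str, timeout: float = 3.0) -> bool:
        """Benign reachability check for the SQL Server Resolution Service.

        Sends the standard one-byte instance-enumeration request (0x02) to
        UDP port 1434 and waits for any reply. Contains no overflow data
        and no shellcode.
        """
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(b"\x02", (host, 1434))
            data, _ = sock.recvfrom(4096)
            return len(data) > 0
        except OSError:  # timeout, or ICMP port-unreachable on some platforms
            return False
        finally:
            sock.close()

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1"
        print("exposed" if resolution_service_exposed(target) else "not reachable")

A reply of any kind means the port is reachable and should be firewalled. Publishing a check like this gives attackers nothing they don't already have; publishing the overflow itself is another matter entirely.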
The value of the “standard,” though, is a little hard to decipher. The participants are all responsible companies, but they can't make their practices binding on others. I suppose the idea is to make the procedures clear to everyone and also to make clear what is not acceptable, specifically the release of exploit code and technical details of the bug before people have had a reasonable chance to protect themselves. Section 7.3.12 states: “The Security Advisory shall not include proof of concept code or test code that could readily be turned into an exploit, nor detailed technical information such as exact data inputs, buffer offsets, or shell code strategies.” But it's not a standard in the sense that things like HTTP are standards, which facilitate interoperability between implementations of systems. This is more a matter of social pressure.
As the SecurityFocus report on the topic mentions, disclosure information forbidden under the standard appears on prominent security discussion lists, such as Bugtraq, all the time. Bugtraq is run by SecurityFocus, which is owned by Symantec, which is a member of the OIS. Do disclosures that are premature under the standard constitute a violation of the procedures by Symantec? Will Symantec change the way such postings are handled on Bugtraq as a result?
Consider the discussion-group reaction to the SecurityFocus story, which was entirely hostile. There is a thick undercurrent of paranoia in the reactions. Writers seem to feel this is an attempt to preempt their free speech, when it is simply an attempt to standardize decent behavior. You don't want to follow these rules? It's a free country, and you're free to go on being a jerk, putting weapons into the hands of vandals; but don't get the idea that normal people admire what you do. Most white-hat arguments completely ignore the blatantly obvious downsides of such activities: they force end users to spend more time maintaining their systems than they otherwise would need to, and they make vulnerabilities more critical than they otherwise would be.
The counterargument is that the exploits would still be out there and known to the underground (“black hats”), but that the good guys (the people making the counterargument) wouldn't have the information. This is borderline, if not outright, sophistry. Why would good guys need exploit code, especially in source form? Free speech is good, but not everything that can be said is worth saying.
Try this on for size: even if the vendor ignores your report, it's wrong to release exploit code. It's just wrong. Whatever benefit you can theorize for it, you are also making it easy for the maliciously inclined to exploit innocent bystanders. You are an accomplice to that crime.
Security Supersite Editor Larry Seltzer has worked in and written about the computer industry since 1983.