As secrets go, it wasn’t very technical.
Matt Blaze, a respected security expert and research scientist at AT&T Labs Research, in Florham Park, N.J., published a paper last fall describing how to make a master key for an office building or a school. The method required nothing more than the ordinary key to any single lock in the building, access to that lock, and a small number of key blanks.
The attacker would need no special skills or tools, aside from a metal file, to create the master key, according to Blaze. Once his research hit the media in January, Blaze was inundated with angry e-mail from locksmiths accusing him of being irresponsible for publishing his findings. It turns out that the method Blaze described has been known among locksmiths—and criminals—for decades. The professionals were angry that the secret was now out.
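The core of Blaze’s method is probing each pin position independently, using the lock itself as an oracle: a test key cut to match the known change key everywhere except one position will open the lock only when the cut at that position happens to hit the master depth. The Python sketch below simulates that adaptive search under a deliberately simplified pin-tumbler model; the class and function names and the ten-depth cut range are illustrative assumptions, not details taken from Blaze’s paper.

```python
# Minimal simulation of the per-pin "oracle" attack Blaze described,
# under a simplified pin-tumbler model. Names and the depth range are
# illustrative assumptions, not taken from the paper.

DEPTHS = range(10)  # assume 10 possible cut depths per pin position

class MasterKeyedLock:
    """Each pin position opens at the change-key depth or the master depth."""
    def __init__(self, change_key, master_key):
        self.shear = [set(pair) for pair in zip(change_key, master_key)]

    def opens(self, key):
        return all(d in pins for d, pins in zip(key, self.shear))

def recover_master(lock, change_key):
    """Probe one pin position at a time, using the lock as an oracle."""
    master = []
    for i, known in enumerate(change_key):
        found = known  # if no other depth opens the lock, master matches here
        for depth in DEPTHS:
            if depth == known:
                continue
            probe = list(change_key)
            probe[i] = depth          # "file" a test key differing only at pin i
            if lock.opens(probe):     # a second working depth exposes the master pin
                found = depth
                break
        master.append(found)
    return master

if __name__ == "__main__":
    change, master = [3, 1, 4, 1, 5, 9], [2, 7, 4, 8, 5, 0]
    lock = MasterKeyedLock(change, master)
    assert recover_master(lock, change) == master
    print("recovered master bitting:", recover_master(lock, change))
```

Because each position is probed on its own, the search grows linearly with the number of pins and cut depths rather than exponentially, which is why a metal file and a handful of blanks suffice: a blank can be filed progressively deeper to cover several probe depths in turn.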
It’s an argument many in IT security know all too well.
As the issue of full disclosure versus secrecy—debated with religious fervor for years in the security industry—rages on, the parallels to the case of the angry locksmiths are clear: On one side are those who believe that the full disclosure of vulnerability information helps administrators secure their networks; on the other side are folks who say that publishing this data only helps attackers and that the benefits to the rest of the community are minimal.
“Full disclosure is the worst we can do, except for everything else,” said Bruce Schneier, chief technology officer and co-founder of Counterpane Internet Security Inc., in Cupertino, Calif. “I really believe that the reason people adopt the secrecy argument is that it’s much easier to understand. If I tell you this guy knows how to break into your house, your first reaction is to make him shut up. People confuse vulnerability information with the vulnerability itself. Everything is kept quiet, and nothing improves.”
Close on the heels of Blaze’s revelation came a brief crisis of conscience that led researchers at Next Generation Security Software Ltd., of Surrey, England, to reconsider whether to release exploit code with their vulnerability reports. Code that David Litchfield, the company’s co-founder, included with his bulletin warning of the SQL Server 2000 flaw that the Slammer worm exploits was used by the worm’s creator as a template. This led Litchfield to write a message on the BugTraq mailing list wondering whether the practice of releasing exploit code did more harm than good.
Historically, this has been the crux of the disclosure debate. Few people question that there are legitimate uses for exploit code, such as testing potentially vulnerable systems or deconstructing the code for educational purposes. But opponents of full disclosure often say that the potential benefits of publishing such code pale in comparison with the harm that can be done by attackers with this kind of detailed knowledge.
Litchfield said he and his brother, Mark, will continue to publish sample exploits in an effort to give administrators and security specialists a level playing field in their battle against crackers. The decision was not one they made lightly, Litchfield said, but it was made easier by the hundreds of e-mail messages they received encouraging them to keep publishing exploits.
“There are people out there with a high level of intelligence developing, sharing and actively using exploits against [insecure] systems,” Litchfield said in a lengthy e-mail explaining his thoughts on the subject. “Regardless of motive, there is much to be learnt from these people and their exploits. But if this was the only source of information for those working in the security industry, then the bad guys would always be one step ahead of the good guys; and if they’re one step ahead, we lose and so do the organizations we’re trying to protect.”
AT&T’s Blaze agrees. “The existence of this method, and the reaction of the locksmithing profession to it, strikes me as a classic instance of the failure of the ‘keep vulnerabilities secret’ security model,” Blaze wrote in an essay on his Web site. “I’m told that the industry has known about this vulnerability and chosen to do nothing—not even warn its customers—for over a century. Instead it was kept secret and passed along as folklore, sometimes used as a shortcut for recovering lost master keys for paying customers. If at some point in the last hundred years this method had been documented properly, surely the threat could have been addressed and lock customers allowed to make informed decisions about their own security.
“Although a few people have confused my reporting of the vulnerability with causing the vulnerability itself, I can take comfort in a story that [scientist] Richard Feynman famously told about his days on the Manhattan Project. Some simple vulnerabilities and user interface problems made it easy to open most of the safes in use at Los Alamos. Feynman eventually demonstrated the problem to the Army officials in charge. Horrified, they promised to do something about it. The response? A memo ordering the staff to keep Feynman away from their safes.”