Computer security is a strange mixture of intuition, guesswork, mathematical certainty and obsessive attention to detail. The professionals must try to convert specialized formulae into seamless programs without making any mistakes. If one metaphorical back door is left ajar, banks, hospitals, stores and everyone else pays the price.
Ordinarily, such complexity is exactly the environment in which open source software flourishes: sharing the code encourages everyone to help remove the bugs. But while many professionals who build computer security and encryption tools see the benefits of openness, some worry that the same source code that helps professionals close back doors and plug holes also makes it easier for attackers to locate them first.
So is the openness a help or hindrance? It depends on whom you talk to.
Many think the algorithms are so complex that only peer review and scrutiny can catch strange errors and inadvertent bugs.
But it really depends on the application, said Jim Bidzos, chairman of VeriSign and vice chairman of RSA Security.
"If you're talking about encryption, peer review of both algorithms and source code is good," Bidzos said. But "if you're talking about other administrative tools, such as authorization tables and procedures, then you wouldn't want an attacker to see it, and there is likely little benefit to source review."
Peer review of encryption code has revealed some startling weaknesses. Last summer, for instance, German cryptographer Ralf Senderek published news of a loophole in the way a popular encryption program, Pretty Good Privacy, stored its keys. While the software protected personal keys, it did little to stop additional back doors from being installed. PGP Security quickly released a patch for the bug, which was apparently introduced by mistake in 1997.
News of other holes in popular programs is common. Cryptographers in the Czech Republic recently found weaknesses in OpenPGP, an open source cousin of the program.
Some see these discoveries as proof that the system works: bugs were spotted and fixed. Holes in closed source products also exist, and they're much more difficult to identify. There are numerous stories of bugs discovered in Microsoft products, for instance, as well as in the code of all prominent vendors. Giving everyone a copy of the source code at least makes any search for such holes easier.
"My favorite example is Borland InterBase," said Jon Lasser, who helped create Bastille Linux, a version of Linux built with an eye toward plugging security holes. Borland InterBase is a popular commercial database that was released as an open source tool. Seven months later, programmers discovered a previously secret back door.
"People were saying it took seven months to find it," Lasser said. "But before it was open source, it was there for seven years."
Conversely, shielding the source code from scrutiny does not keep it from falling into the wrong hands.
Theo de Raadt, a security expert and the leader of the OpenBSD development team, said that copies of important code from Cisco Systems, Microsoft and other prominent companies are frequently spotted circulating in the underground.
"Cisco has 800 employees with read access. There's no way to trust all of those people," de Raadt explained. "Realistically there's no such thing as closed source. The people who are most capable are going to be able to get access."
Others argue that the most important job is ensuring that a talented person scrutinizes the code. Just opening it up is not enough.
De Raadt's team at OpenBSD concentrates on eliminating security holes by auditing the source code — a task that is also often taken on inside proprietary software companies. The audits have made OpenBSD one of the more popular open source operating systems for use on Web servers and firewalls.
"I don't buy the 'many eyes' argument. Most of the people looking at the code aren't qualified," de Raadt said.
But finding qualified people is not easy, because there's not much money in auditing open source. Companies like Microsoft have an incentive to audit their code and remove bugs: if they don't, their customers will desert them.
"Experience has shown time and again that just making source available doesn't ensure that it will be reviewed — especially by experts competent to find vulnerabilities," said Steve Lipner, manager of Microsoft's Security Response Center.
"In contrast, Microsoft invests heavily in processes, tools, testing and training to ensure that our code gets in-depth security review by people who are both paid and motivated to find and eliminate vulnerabilities. We also license our source code to enterprises and researchers who are motivated to review it and report any vulnerabilities they find," Lipner said.
So does openness help or hurt? De Raadt argues that most sophisticated attackers don't bother reading the source code because it can be too complex. "Most of the people who [are] attacking are behavioral analysts. They play around with the program and see how it feels," he said. "About a quarter of the serious attacks we see are discovered by reading code. The rest are discovered by watching the behavior, and then they just aim ... and see what happens."