The usually simmering open source vs. closed source debate boiled over recently following the leak of Windows source code on the Internet. And it boiled over here too.
Some 95 percent of the responses to my column on the Windows source code leak, and what it might indicate about the value of closed source as a security technique, said that I didn't get the point: Since open source is open, it gets a better code review. Anyone can get the source, look at it and find problems in it.
Inherent in this argument is the assumption that closed-source projects don't get code reviews, or at least that they get inferior ones. I'm not so sure this is true. In fact, there's no reason to believe that closed-source companies can't do a good code review, and not a lot of reason to assume that open-source projects are getting all the code review that people think they get.
Meanwhile, there isn't any official system for reviewing open-source code for security problems. It's one of those ad hoc, community arrangements.
Unquestionably a lot of checking happens; some from the same consultants who do “black box testing” of Microsoft products, and some from other open-source developers. Recently, however, an attempt to set up a formal organization, called Sardonix, to organize these reviews, essentially failed when funding dried up after nobody showed up to do the reviews.
A SecurityFocus article on the failure hints at the reasons: people don't want to volunteer to do the boring, rote parts of a real security audit. Instead, they want to find scary vulnerabilities and exploits, and then bask in the glory of having found them.
The only contributions to the project came from Berkeley grad students under the direction of a professor. This is actually a great idea for an academic project, but it doesn't give me a warm feeling about the level of experience of the reviewers.
On the other hand, the people at Microsoft who do code reviews are paid to do it. How well they review code factors into their own performance reviews and their own compensation.
According to Michael Howard, senior program manager in Microsoft's security business and technology unit, if a vulnerability is found in code you wrote or reviewed, it's going to be noticed and affect your own performance evaluation.
This strikes me as a pretty good incentive to be careful.
Who Does The Reviews
And it's not just Microsoft that reviews Microsoft products. Howard told me that an extensive outside review of Windows XP SP2 is currently underway.
Since a recompile with new compilers is an important part of SP2, the review will include examination of the compilers too.
No doubt, many people consider Microsoft either lazy or stupid when it comes to security, and we all wish the company had gotten better at it faster. But from the information provided by Howard, it sounds as if Microsoft is very serious about security and is capable of doing it right.
Yet serious problems persist in Microsoft products, just as they persist in open-source products. The reason is not that nobody cares, but that it's hard to write good software that's free of security problems.
Admittedly, I learned to program back in the Reagan administration, but nobody told me to look out for security holes then, and I doubt many programmers cared until very recently. A good code review is no easy task, and besides, it's not easy to focus on security needs at the same time you're trying to write a program that has some actual, useful goal.
Nowadays, minding security is something that has to be done, but it's still not taught in many schools. Worse, it's something few people know how to do well.
The one bug that has come out so far (as I write this) from the leaked source is a great example of how this all works. The bug was an integer overflow bug, potentially leading to execution of arbitrary code.
The code that was leaked was dated about 3.5 years ago, when few, if any, people were aware of integer overflows as a potential security problem. A good code review, by the standards of 3.5 years ago, could easily have missed this problem.
Microsoft's statement on the matter is that the problem was found and fixed in Internet Explorer 6, and it is completely plausible that a later review, with an awareness of integer overflows and their implications, found the problem. (Some would claim that Microsoft should issue a fix for the bug in IE 5.x, but the company's official position has for some time been that all users should move to IE 6.)
On the other hand, the “OpenSSL ASN.1 parser insecure memory deallocation” bug, which was very similar to the recent Windows vulnerability related to the same ASN.1 standard, got comparatively little publicity, even though pretty much every open-source operating system uses it.
Every version of OpenSSL up to that point was vulnerable, which means the bug had slipped through for years. How could this have happened? Simple: it's hard to find these things.
Wouldn't it be great if the relationship between source code and security were as simple as some people make it out to be? If you search the CERT Coordination Center's vulnerability database, especially when sorting by its severity metric, you see lots of platforms well represented.
Open source doesnt make code secure, nor does closing source make it insecure.
Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983.