Indeed, the National Institute of Standards and Technology (NIST) estimated that in 2001, $59.5 billion annually — about 0.6 percent of the gross domestic product — was being lost because of software bugs. The Sustainable Computing Consortium (SCC), an academic, government and business initiative to drive IT improvements, estimates that's on the low side. Its estimate is that defective computer systems cost U.S. companies alone over $200 billion annually. Yow!
Hardly a week goes by that we don't report a major software bug or security hole at eWEEK.com. As Gregory Tassey, the senior economist in charge of the NIST report, says, "Software is at the extreme end in terms of errors or bugs that are in typical products when they are sold."
Jim Laruf, senior researcher at Microsoft Research, agrees: "We've been writing software for about 50 years and we still produce software with a high number of bugs. Our tools have gotten better, but the quality of our code doesn't reflect this."
The general rule of thumb is that it takes $10 to fix a bug during development; $100 to fix it in QA; $1,000 to fix it during beta testing; and $10,000 or more to fix it post-deployment.
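To make that escalation concrete, here is a minimal sketch tabulating the rule of thumb. The phase names and the $10 development baseline come straight from the figures above; the code itself is just an illustrative way to lay them out, not anything from the NIST report.

```python
# Sketch of the "tenfold per phase" bug-fix cost rule of thumb.
BASE_COST = 10  # dollars to fix a bug caught during development
PHASES = ["development", "QA", "beta testing", "post-deployment"]

def fix_cost(phase: str) -> int:
    """Cost to fix one bug, growing tenfold with each later phase."""
    return BASE_COST * 10 ** PHASES.index(phase)

for phase in PHASES:
    print(f"{phase}: ${fix_cost(phase):,}")
# development: $10, QA: $100, beta testing: $1,000, post-deployment: $10,000
```

The takeaway is the same as the prose: a bug that would have cost lunch money to fix at a developer's desk costs three orders of magnitude more once it reaches customers.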
Why do we waste so much time and money?
Jim Johnson, chairman of the Standish Group, an IT investment-planning group, says it's because, "Microsoft taught us we can get away with sloppy code. And, from the mid-'90s to early 2000s, most programming was sloppy. Now there's a pushback for better-quality software, both from fed-up end users and from the sheer costs of bugs." All too often, programs are written in a hurry to hit unrealistic deadlines, and bug fixes are accepted as a natural part of the post-release process.
I wouldn't say it was Microsoft, though; I'd say it was a combination of the factors that Johnson mentions and proprietary software.
My esteemed colleague Larry Seltzer disagrees. In his latest column, he comments, "Open source doesn't make code secure, nor does closing source make it insecure." To me, the bottom line is that the more eyes there are on the code, the better the chances are that someone is going to catch a mistake. And with open source you get more — and, what's more important, better — eyes on the code.
No one may get paid for it, nor will their performance evaluation necessarily be affected, but as Eric Raymond points out in his classic "The Cathedral and the Bazaar," those factors may not be what motivate open-source programmers. Instead, they're motivated by a search for excellence and peer recognition. As Raymond says, the open "style greatly accelerates debugging and code evolution."
Why? Raymond explains, "One key to understanding is to realize exactly why it is that the kind of bug report non-source-aware users normally turn in tends not to be very useful. Non-source-aware users tend to report only surface symptoms; they take their environment for granted, so they (a) omit critical background data, and (b) seldom include a reliable recipe for reproducing the bug."
"The underlying problem here is a mismatch between the tester's and the developer's mental models of the program; the tester, on the outside looking in, and the developer on the inside looking out. In closed-source development they're both stuck in these roles, and tend to talk past each other and find each other deeply frustrating."
"Open-source development breaks this bind, making it far easier for tester and developer to develop a shared representation grounded in the actual source code and to communicate effectively about it."
No beta test for a proprietary program can duplicate that experience.
This doesn't work for all open-source projects, however. For example, if an open-source project has only a handful of user/developers, there simply aren't enough eyes to make a real difference to the final product. But for popular open-source programs, such as Linux, Apache or Samba, open source ensures that the overall code quality will be better.
The real problem is that programming for security, open source or closed source, is, as Seltzer observes, darn hard to accomplish. In all the programming projects I know of, with the exception of the open-source OpenBSD operating system, functionality and speed come first, with security a distant second.
Thus, while open source doesn't necessarily make code more secure, it does promote the rapid evolution of better code. And this, in turn, from where I sit, means popular open-source programs are inherently more likely to be secure than their proprietary cousins.
eWEEK.com Linux & Open Source Center Editor Steven J. Vaughan-Nichols has been using and writing about operating systems since the late '80s and thinks he may just have learned something about them along the way.