Tuesday is Microsoft's monthly patch day, and I'm going to make a prediction: more of the same.
We're going to see holes in Internet Explorer and Windows discovered by security research firms, who will get themselves quoted and otherwise mentioned all over the news because they discovered the problems and reported them to Microsoft.
One thing we won't see is vulnerabilities discovered by some arbitrary schmoe who read the Windows source code leaked to the Internet three months ago. Remember that? Yeah, it's been three months already.
Many predicted that a flood of worms and hacks would ensue, but the practical result has been far more subdued. I know of one specific bug that was exposed very quickly—an integer overflow in BMP handling—and it had been patched months before. I'm told that one or two other holes of similar impact resulted, but nothing to get your hair mussed about.
I wrote a column at the time suggesting that if a flood of attacks did ensue, it would demonstrate that there really is something to "security through obscurity." My logic was that the code had been out in binary form and scrutinized for years, so if the availability of source made a difference, then ... well, the availability of source makes a difference. I still think this is unassailable.
But very little has come out, so what are we to make of that? First, it's worth pointing out that the vulnerabilities revealed since then are all demonstrably not based on the rogue source code. All of them have come from legitimate researchers who would never touch the illicitly released source.
And while some of these researchers may have had legitimate access to the Windows source code for outside code reviews and other such purposes, they wouldn't get much more code-review business if they used that opportunity to work on public advisories.
In fact, according to Firas Raouf, chief operating officer of eEye Digital Security, "Most, if not all, of the critical vulnerabilities/attacks over the past six months that are remotely exploitable involved remote services like RPC, IIS, LSASS, etc."
None of this code was leaked. The leaked code was all client-side code.
Not only is different code involved, but as I've pointed out elsewhere, Microsoft is taking far too long to solve problems. Of the 20 vulnerabilities announced last month, six were discovered and reported by eEye, and all six were reported more than three months prior.
So, what does this mean about the availability of source code and the discovery of vulnerabilities? It's not so clear. I would argue that it shows source code isn't really so important to security research, since the truth will come out anyway. From what I've seen of open-source products, the timeline isn't all that different. Issues come up all the time involving very old source code.
But you also can make the case that source code would help research, and that perhaps all of the problems eventually found through non-source analysis of the leaked code would have come out faster had the source been available.
eEye's Raouf strikes the balance when he says that "having access to source code would make it easier for us to uncover vulnerabilities. But it is not a must. We're finding enough without access to the source code by taking an outsider's perspective on their discovery."
So, back to the big picture: Does access to source code assist in security research, or doesn't it? Maybe it does, but the experiment with the leaked Windows source is no evidence of it, at least for high-profile products.
Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983.