I have a lot of experience in the software testing business, having been technical director at PC Magazine Labs, PC Week (now eWEEK) Labs, and a large private contract testing lab. It's not easy work to do well. As the size and complexity of the product being tested increases, the complexity of testing increases correspondingly.
So I'm inclined to be sympathetic to the problems faced by companies like Microsoft when issuing patches to large, complex products like Windows 2000 Server. But my sympathy for the vendors is dwarfed by my sympathy for customers who may have to risk the stability, security, or correct function of their systems when applying patches.
Last week's revelation of problems with the MS05-051 update struck me as a good example of the complexity problem. Not to excuse any of it, but a lot of it sounded like the sort of edge cases that are sometimes inevitable.
But then came Thursday's revelation about the DirectShow patch on Windows 2000.
This is a big, fat, obvious test point by any measure. It's hard to fathom how any reasonable QA process could miss the fact that the patch didn't work.
And Microsoft is not alone with this problem. Prior to last week's release of numerous, profound security updates by Oracle, David Litchfield of NGS Software went public with complaints that Oracle's security practices had left severe vulnerabilities in place for over a year, and that quick-hack patches by the company didn't actually address the root problems.
Following this week's updates, Litchfield claims that the vulnerabilities they address are still exploitable.
And even smaller projects can have problems like this. The most recent updates to the Firefox Web browser addressed some “regressions” from previous fixes, and it's not the first time that's happened.
It's a little easier to forgive an outfit like the Mozilla Project, but that's still small comfort if you end up as the user who gets compromised by the bug.
Here's the real problem: Before new versions of major programs like Oracle servers and Windows are released, they go through large, often public test processes, commonly known as beta testing.
If you select a large enough pool of testers, combined with in-house and some contract testing, you are likely to hit all the important configurations that your customers will encounter.
This doesn't happen with patches. Because companies are anxious, for reasons with which I sympathize, to keep the details of vulnerabilities under wraps until a fix is available for customers, they can't hold public beta tests of patches.
It's too easy to reverse-engineer patches to determine what is being fixed, and indeed this is how many exploits are written after patches are released.
But it's clear that the testing resources of even the largest companies are not adequate to the task.
In January, Microsoft announced that it would allow certain volunteers to test patches, but if anything real came of that program, it doesn't appear to have succeeded.
The answer could be to use external testers.
As Litchfield's comments suggest, and assuming he's accurate, the perspective of vulnerability researchers could be valuable here, but they are only part of the solution.
The real bread and butter of the testing that needs to be done is to throw the software at a zillion configurations and verify that a) the patch is effective and b) it has no adverse side effects.
Maybe it would help just to contract the work out to outside labs.
While the testing needs to be done under a strict non-disclosure agreement, Microsoft, Oracle, and every other company with software at risk need to be public about the efforts they make to ensure that their updates are effective, safe and timely. Because right now, it would be fair to assume the opposite.
Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983. He can be reached at larryseltzer@ziffdavis.com.