If Charles Darwin taught us anything, it's that change isn't spontaneous. Protoplasmic amoebas didn't sprout legs overnight and start roaming the land in search of fast food. And new species have never just happened – "Hey, what's that? Let's call it a ferret." Creation is the result of evolution, a progression of events that leads us to the present.
As it turns out, Darwinian law also applies to the virtual world.
It's fitting that the e-security problem on everyone's minds and monitors carries the same name as something found in nature. The worm. Or the Lovsan/Blaster/RPC worm, to be exact. Like its physical, fish-bait counterpart, this worm didn't pop up out of the ether.
Most IT departments have known about the Microsoft Windows flaw this worm exploits – the RPC-DCOM vulnerability – and the corresponding patch for weeks. So why is Lovsan spreading like butter on warm bread? The answers are many, but before we dive into them, it's important to understand the chain of evolutionary events that led to the birth of Lovsan.
A Brief History of the Progression of Events
- At some point, Microsoft was notified of the RPC flaw by the Last Stage of Delirium Research Group. The official date has not been disclosed.
- On July 16, 2003, Microsoft released version 1.0 of the MS03-026 patch designed to fix the flaw.
- Within days, Xfocus, a Chinese technology research group, published the first exploit code online, designed to take advantage of systems that had not applied the patch.
- Because of the code's availability, media, analysts and security industry experts began to issue warnings of an impending worm in late July.
- On July 30, 2003, the Department of Homeland Security issued an alert warning of a potentially significant impact on Internet operations as a result of the flaw.
- On July 31, 2003, The CERT/CC, a major reporting center for Internet security problems, issued its own advisory, indicating that research showed intruders actively scanning for and exploiting the RPC vulnerability.
- The Lovsan worm appeared on August 11, 2003.
- By midmorning August 13, media reports estimated that more than 228,000 computers had been compromised.
So with 26 days between the first availability of the patch and the release of the worm, why did so many computers remain unprotected? The reasons undoubtedly include many of those heard before the Code Red and Nimda worms wreaked havoc on computer systems – a lack of time to manually patch systems, concerns over patch interference with existing applications, and confusion about patch versions and which service packs should be used.
While these are legitimate reasons, it seems clear that they are no longer acceptable in an economy where companies can't afford to lose time or resources to computer failures. Organizations like the Maryland Motor Vehicle Administration, which had to shut down all its offices Tuesday, and the city of Philadelphia, which was also knocked offline by Lovsan, not only lose productivity and revenue, but inconvenience a slew of customers as well.
Despite beliefs that large corporations and government agencies have the time and talent to deal with computer vulnerabilities, many companies still handle patch management manually – a nearly impossible task considering that new patches surface daily. As a result, network administrators fall behind, and the necessary patches aren't in place when a worm or virus hits, leaving companies scrambling to patch or repair as best they can.
Confusion over which patches are the right ones to apply is also an issue. Most patches are dynamic, meaning they are continually modified and updated by the companies that release them.
Many of these issues can be avoided through a proactive patch management approach. New tools available today have eliminated many of the manual aspects of security management and offer extensive features to help users understand, test, and apply the right patches. That's why patch management has become a strategic enterprise solution.
But the most effective patch management software should make patch scanning and remediation extremely straightforward, as well as accurate and secure. Important features to look for include:
- auto-deployment
- offline support
- knowledge management features such as patch annotation
- a shared back-end database to facilitate collaboration
- patch management tracking to compare progress against existing enterprise security initiatives
It seems that, on some level, being proactive about patching and heeding the warnings of industry gurus may be the best way to avoid exposure. After all, it takes only one imperfect line of code in millions to create a security vulnerability. This, like evolution, is a law of nature that just won't change, no matter how much time goes by.
Mark Shavlik is President and CEO of Shavlik Technologies.