Patch Management: Stop Killing the Database to Save It

 
 
By Charles Garry  |  Posted 2005-07-13


With all the recent public disclosures of data-privacy breaches, the spotlight is shining brightly on database security. Databases are clearly under direct attack from hackers with ever-increasing frequency. To paraphrase a famous exchange: "Why do you hack into databases?"

"Because that's where the data is!"

In recent years we have seen an increasing number of alerts concerning vulnerabilities in database software. The most visible vendors are, of course, Microsoft and Oracle, in part because their products make up the majority of installed database systems.

Hackers understand market share as well as the next guy. As the number of reported severe vulnerabilities has risen, vendors have responded with patch-release schedules. The hope was to simplify the process for administrators and mitigate the impact on end users.

The difficulty is that the data in a database must be both available to those who are supposed to use it and unavailable to those who are not. Moving to a quarterly schedule of patch releases, as Microsoft and Oracle have done, simply means that end users get to schedule their outages.


Yes, folks, database patches almost always equal unavailability. So the solution to keeping our databases available only to authorized personnel … is to make the database unavailable to everyone.

Certainly, users are very concerned about the impact on business operations and personnel costs, both real and intangible.

The real costs involve adding database administrative staff in larger IT organizations so that work weeks already averaging 50 hours do not stretch past the breaking point.

Intangibles include the cost of staff turnover driven by burnout from those long hours, as well as the increased chance of extended outages caused by overworked administrators making errors during the complex, largely manual task of applying the patches.

Now, vendors are legitimately trying to balance the impact of applying these patches against the impact on a customer whose database gets hacked. Having said that, let's be honest: The database vendors have something to protect as well. Their brands! So why are they not doing more to help this process?

Many users still feel abandoned by their vendors, especially if they are still running critical systems on older versions of the database. Their only recourse is to upgrade to the newest version, which, of course, means more downtime for the end users.

The market would love to see a moratorium on new features, with more vendor time spent on delivering robust patch management tools. Why not stop new feature development for 6-12 months and focus all development effort on better patch management?


Better Patch Management Features


We already have some of the features; for example, Oracle's Enterprise Manager will give the user basic information on which patches have been applied.

Vendors need to go further, however, and provide products that show not only which patches have been applied, but which ones have not.

A strong patch management tool should enable the DBA to apply multiple patches on top of each other, and to reverse individual patches if problems occur.

Finally, the tool should enable the DBA to push out patches to large numbers of database servers across the network. Oh, and can we have that without the need for the database to be stopped, whenever possible?
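To make the wish list above concrete, here is a minimal sketch of what such an inventory might look like: it records which patches have been applied, reports which known patches have not, lets patches stack on top of one another, and reverses an individual patch in last-on, first-off order. This is purely illustrative; the class and patch names are invented for the example and do not describe any vendor's actual tool.

```python
class PatchInventory:
    """Toy model of the patch-tracking features described above."""

    def __init__(self, known_patches):
        self.known = list(known_patches)   # patches the vendor has published
        self.applied = []                  # a stack: most recent patch last

    def apply(self, patch_id):
        """Apply a patch on top of whatever is already applied."""
        if patch_id not in self.known:
            raise ValueError(f"unknown patch: {patch_id}")
        if patch_id in self.applied:
            return False                   # already applied; no-op
        self.applied.append(patch_id)
        return True

    def missing(self):
        """Not just what has been applied -- what has NOT been."""
        return [p for p in self.known if p not in self.applied]

    def rollback(self, patch_id):
        """Reverse an individual patch. Anything applied on top of it
        must come off first, in reverse order of application."""
        if patch_id not in self.applied:
            raise ValueError(f"patch not applied: {patch_id}")
        removed = []
        while self.applied:
            top = self.applied.pop()
            removed.append(top)
            if top == patch_id:
                break
        return removed


inv = PatchInventory(["CPU-2005-01", "CPU-2005-04", "CPU-2005-07"])
inv.apply("CPU-2005-01")
inv.apply("CPU-2005-04")
print(inv.missing())                 # ['CPU-2005-07']
print(inv.rollback("CPU-2005-01"))  # ['CPU-2005-04', 'CPU-2005-01']
```

A real tool would, of course, also have to execute the patch scripts, coordinate across many servers, and avoid stopping the database where possible; the sketch only captures the bookkeeping.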

I'm sure other things could be added to the wish list for such a tool, but let's start there. And, by the way, we shouldn't have to pay extra for these "features."


The IT organization must do its part as well. It must invest in infrastructure to support change. On the pure hardware/software side, this means a well-planned and well-managed quality assurance test environment.

On the people/process side, a change-management group should be established, with the power to coordinate, serialize, document, communicate and approve change promotion for all groups effecting change to an application.

This group should require a detailed process flow for applying changes, and detailed plans for falling back or recovery (including expected time frames). Included should be a listing of users, applications and jobs potentially affected by the proposed changes. Be sure to use that list to clearly communicate the changes to those affected.
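One way to keep such requirements honest is to capture them in a structured change record. The sketch below is a hypothetical illustration of the fields the preceding paragraphs call for (a detailed process flow, a fallback plan with an expected timeframe, and the list of affected users, applications and jobs that doubles as the communication list); all names are invented for the example.

```python
from dataclasses import dataclass, field


@dataclass
class ChangeRequest:
    """Hypothetical record a change-management group might require."""

    description: str
    process_steps: list          # detailed flow for applying the change
    fallback_plan: str           # how to fall back or recover
    fallback_minutes: int        # expected timeframe for the fallback
    affected: list = field(default_factory=list)  # users, apps, jobs
    approved: bool = False       # change promotion requires sign-off

    def notification_list(self):
        # The affected list is also the communication list:
        # everyone on it should be told about the change.
        return sorted(set(self.affected))


cr = ChangeRequest(
    description="Apply quarterly database security patch",
    process_steps=["snapshot QA", "apply in QA", "verify", "apply in prod"],
    fallback_plan="restore pre-patch snapshot",
    fallback_minutes=45,
    affected=["billing app", "nightly ETL job", "reporting users"],
)
print(cr.notification_list())
```

The point is not the code but the discipline: if the record cannot be filled in, the change is not ready to be promoted.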

Finally, organizations must keep the people who make the change process work happy. Personnel should not be over-managed, and should be given the authority and responsibility to do their jobs.

Organizations should motivate and reward employees for meeting service level agreements by paying bonuses and giving additional pay for on-call duties. Organizations should also keep personnel educated and current by sending them to a related conference or course once a year. That is my suggested prescription, although it does have side effects.

A strong change-management process improves many more aspects of a business than just implementing vendor patches, so talk to your administrator. Better yet, when an application isn't available, take it up with your customers.

Charles Garry is an independent industry analyst based in Simsbury, Conn. He is a former vice president with META Group's Technology Research Services.
