Just when you thought things might be slowing down, becoming more standardized and easier to manage, the software companies and their willing sacrificial lambs (yes, that would be you) have found new and exciting ways to complicate their lives.
And, make no mistake, complexity is the thing everyone craves, yet no one can manage.
Sure, added functionality means accepting some additional complexity, but why is that bad? Because nearly every study completed in the last five years points to the same conclusion: as much as 90 percent of all computing outages are caused by human error, not hardware failure.
It may occur because an administrator enters the wrong command, or the right commands in the wrong order. It may happen because of poor change management, poor data validation or someone tripping over a cord. The more complex a system is, the more intrinsically vulnerable it is, because even a seemingly minor change can cause havoc. So why do we keep falling into this trap?
The truth is that sometimes it's the marketing, but sometimes we really do need that new functionality regardless of the complexity it brings. However, it's important that when we introduce complexity into our IT systems, we make sure our management procedures and our organizational structures are ready for the added burden.
Two examples of additional complexity being thrust upon us these days are the concepts of grid computing and Web services.
In the past, big changes in the software industry usually came from a new killer app that created an entirely new submarket for application software (think customer relationship management, enterprise resource planning and so on).
Nowadays it seems like every major class of application software has been created, so no new big thing is on the horizon, right?
The truth is that amazing things are being done that will significantly change the software market as we know it, but it's not about a new killer app; it's about the infrastructure those apps will operate in.
Grid computing is a great example. The concept is that we can write code that can utilize a pool of cheap and expandable processing power to complete jobs more quickly and with more availability than ever before. This will supposedly provide the organization with a more adaptive infrastructure, enabling the IT department to expand or shrink the infrastructure based on business needs. Sounds great, doesn't it?
The problem is that grid computing also requires applications that were designed to run on a grid infrastructure. That is no easy task when you consider that the dominant programming model is the shared-memory model used in multiprocessor servers. Because a grid implies many servers, each with its own memory, it requires a completely different programming model, one built around passing messages between nodes rather than sharing state.
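To make the distinction concrete, here is a minimal Python sketch, not tied to any particular grid product and purely illustrative: threads that coordinate through memory they share, versus worker processes that own their own memory and can only exchange results by passing messages, much as separate grid nodes would.

```python
import threading
from multiprocessing import Process, Queue

# --- Shared-memory model: threads in one process read and write the same data ---
total = 0
lock = threading.Lock()

def add_shared(values):
    global total
    for v in values:
        with lock:              # coordination happens through shared state and locks
            total += v

# --- Message-passing model: each worker owns its memory; results travel as messages ---
def add_distributed(values, out):
    out.put(sum(values))        # the only way to share a result is to send it

if __name__ == "__main__":
    data = list(range(1000))

    # Shared memory: two threads, one address space
    threads = [threading.Thread(target=add_shared, args=(data[i::2],)) for i in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()

    # Message passing: two processes, separate address spaces (as on separate grid nodes)
    q = Queue()
    procs = [Process(target=add_distributed, args=(data[i::2], q)) for i in range(2)]
    for p in procs: p.start()
    partials = [q.get() for _ in procs]
    for p in procs: p.join()

    print(total, sum(partials))  # same answer, very different programming models
```

Rewriting an existing application from the first style to the second is exactly the work most enterprises have not signed up for.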
The bottom line is that your average enterprise application won't be running on a grid infrastructure any time soon. Ahhh, but here is where it really gets confusing! Which vendor's concept of grid computing are we talking about?
Oracle, as just one example, has been pounding the drum for grid computing for almost four years, since the company introduced its RAC (Real Application Clusters) feature. Heck, even the name of its most recent release of database software, 10g, is meant to evoke its grid-like powers.
The beauty of Oracle's marketing campaign has been its promise to deliver all the exciting possibilities of grid computing without the pain. In other words: run your existing applications on our “database grid” today, no changes required.
Indeed, Oracle has done such a good job marketing RAC that some customers I have spoken with believe you must have RAC (a $20,000 per-processor option) to run Oracle's database on Linux. Based on Oracle's sales numbers during the past two years, it's obvious that Oracle customers are literally buying the RAC story.
Oracle's RAC utilizes its own message-passing technique, called Cache Fusion, to enable the company's database software to operate across a cluster of servers but appear to an application as if it were a single database. So more servers equal more availability, correct? Well, maybe not. More hardware and additional software code to support the cluster mean more complexity. More complexity means more opportunity to fail.
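To be fair, the simplicity being sold is real from the application's side of the wire. A minimal sketch, with a hypothetical host and service name and the python-oracledb driver used purely for illustration: the connection code looks the same whether the service is backed by one server or a whole cluster.

```python
# Purely illustrative; host, credentials and schema are hypothetical.
# The application sees one logical database, however many servers sit behind it.
import oracledb

conn = oracledb.connect(user="app", password="secret",
                        dsn="dbhost.example.com/sales")
cur = conn.cursor()
cur.execute("select count(*) from orders")   # cluster topology is invisible here
print(cur.fetchone()[0])
cur.close()
conn.close()
```

Of course, the complexity doesn't disappear; it moves behind that connect string, into the cluster someone now has to configure, patch and monitor.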
It also means that the people who are managing the RAC environment need additional skills, and problem resolution becomes more difficult. So, do you need RAC? For most organizations, the answer is probably not; there are exceptions, and for those willing to spend the money, more power to you.
Web services are yet another example of potential complexity headed your way. The great news is that Web services will enable applications to communicate and share information with one another.
This is obviously a useful and necessary bit of technology, built on top of several standards: XML, SOAP (Simple Object Access Protocol), WSDL (Web Services Description Language) and UDDI (Universal Description, Discovery and Integration). Together they will let developers more rapidly create new virtual applications by connecting bits of processing logic from a number of existing applications.
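To give a flavor of what that composition looks like, here is a minimal Python sketch. The WSDL URLs, operation names and returned fields are all hypothetical; the zeep library is used simply because it reads a service's WSDL description and exposes its operations as ordinary function calls.

```python
# Hypothetical example: a "virtual application" assembled from two existing
# SOAP services. Everything about the services (URLs, operations, fields) is
# invented for illustration; zeep generates the client calls from each WSDL.
from zeep import Client

inventory = Client("http://erp.example.com/InventoryService?wsdl")
shipping = Client("http://logistics.example.com/ShippingService?wsdl")

def quote_order(sku, quantity, destination_zip):
    """Combine stock data from one application with rates from another."""
    stock = inventory.service.CheckStock(sku=sku, quantity=quantity)
    rate = shipping.service.GetRate(weight=stock.unitWeight * quantity,
                                    destination=destination_zip)
    return {"in_stock": stock.available, "shipping_cost": rate.total}

print(quote_order("A-100", 3, "06070"))
```

Each call in that little function depends on a different application, owned by a different team, which is exactly where the management questions begin.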
I have perused a number of books on the subject, but the one thing I never see discussed is how one makes a virtual application highly available. What are the implications for backup and recovery of data? How do we manage security permissions and authorizations? Clearly, with the added flexibility, we have also created a new set of obstacles to overcome.
So the battle between stability and complexity rages on, and IT organizations must be careful about introducing additional complexity without having thought through its implications and true costs.
I suspect we will continue to see a growing trend toward infrastructure rationalization (fewer kinds of things) and consolidation (fewer instances of things) as a means to reduce complexity. This trend is both useful and necessary if we have any hope of utilizing some of these new computing paradigms.
And if you simply can't wait, don't worry; there is always a consultant or vendor ready to lend a hand.
Charles Garry is an independent industry analyst based in Simsbury, Conn. He is a former vice president with META Group's Technology Research Services.