The agile infrastructure.
Just the phrase conjures up images of gazelles running or contortionists bending themselves into pretzel shapes.
Come to think of it, I actually watched a man fold himself up and get locked in a glass trunk at a conference last year. I wasn't thinking agile at the time. My thoughts were more of pain mixed with a touch of revulsion.
But the term agile is on everyone's lips these days. I even found myself debating with a client the other day over the use of the term agile versus nimble.
Clearly the term agility has entered the mainstream business lexicon. Whether due to economic forces, technological forces or even, as recently witnessed, natural forces, the ability to change, or more accurately to adapt to change, is a highly desired attribute for any business organization.
I would argue that “agility” (the ability to adapt) is especially important for IT infrastructure. And why, you may ask? Can you name another industry that has demonstrated as much change, at such a rapid pace, for such a sustained period of time? Not likely.
I often wonder who has been forced to adapt more quickly, the user or the technology? I suppose that is more a “chicken or the egg” type of question.
I remember when the vendor used to create the technology and then go out and convince customers why they might need it. Companies depended on technology firms to tell them how to utilize the technology they were being asked to buy, and then adapted themselves and their business processes to that technology.
I believe things have changed considerably, however. Business leaders no longer need convincing when it comes to the value that technology can bring. They don't blindly follow the lead of the vendor simply because the vendor said so.
In many ways I think we can all thank the developers of the Internet and of course the designers of the World Wide Web for this change.
They were able to show the business world a shining example of how technology could be created that was in fact agile.
By this I mean that the Web/Internet succeeded in creating an architecture, a framework of standards and protocol layers that enabled massive amounts of innovation both by infrastructure hardware and software providers and by those that provide end user applications and interfaces.
Most importantly the Web/Internet accomplished this all while maintaining the overall stability of the system.
So how do we know when our organization has achieved agility?
When we can internally adapt our underlying infrastructure components or customer/user-facing systems to change without sacrificing the stability (i.e., availability, security) of our business processing systems.
Change Has Many Dimensions
Of course, change is itself a difficult concept. Most people associate change with growth, and in the IT world we associate growth with scalability.
The truth is that change has many dimensions and nuances. Indeed, one change we might have to adapt to is not growth at all, but declining growth. How do we adapt our infrastructure to meet that need?
So if the Internet created a successful adaptive networking layer, could it be possible to use those same concepts and principles to create an agile IT infrastructure?
As infrastructure planners, we should study the basic ground rules used by the designers of the Internet. Could they be applied to our planning at an even lower level?
Concepts like “each network must stand on its own, with no changes required to connect it to the Internet” or “there will be no global control at the operations level” are now being adapted to the task of infrastructure planning.
The whole push behind service-oriented architectures aims to provide the spanning layer that organizations will need to achieve agility.
But open source software also moves us significantly toward agility, because open source software and open standards are mutual catalysts.
Open source databases like MySQL or PostgreSQL expand the use of the SQL standard, while the SQL standard is what made MySQL and PostgreSQL possible.
These days, when an organization is forced to scale its proprietary database infrastructure, it faces a dilemma: how do I scale my proprietary database infrastructure without incurring huge hardware and software costs, as well as huge switching costs?
What I'm seeing now are innovative planners who simply reject the premise of the question. Instead they ask: why should I be forced to do either? They restate the problem as:
- How can I make my database infrastructure more agile?
- How can I reduce overall costs?
This has led to some innovative designs in which planners use open source database software to complement their own proprietary database software.
They employ long-used tactics such as offloading read-only processing to replicated farms of open source servers, thereby reducing the load on the proprietary database, and with it the need to buy bigger servers and more database licenses.
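The read-offloading tactic boils down to a simple routing rule: send read-only statements to the replica farm and everything else to the proprietary primary. Here is a minimal sketch of that idea; the host names, the round-robin choice, and the SELECT-only heuristic are purely illustrative assumptions, not any particular product's API.

```python
import itertools

# Hypothetical endpoints -- names are illustrative assumptions.
PRIMARY = "proprietary-db.example.com"   # handles all writes
READ_REPLICAS = [                        # replicated open source farm
    "replica-1.example.com",
    "replica-2.example.com",
    "replica-3.example.com",
]

# Round-robin iterator over the replica farm.
_replica_cycle = itertools.cycle(READ_REPLICAS)

def route(statement: str) -> str:
    """Route read-only SQL to a replica; send everything else
    (writes, DDL, etc.) to the proprietary primary."""
    if statement.lstrip().upper().startswith("SELECT"):
        return next(_replica_cycle)
    return PRIMARY
```

In practice the routing decision is usually made by a connection pooler or proxy rather than application code, and replication lag means recently written rows may not yet be visible on a replica, so read-your-own-writes traffic still belongs on the primary.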
They might even deploy open source software on commodity servers where it handles only a fraction of the processing required by a central server, as in the case of a retail chain with in-store processing requirements.
In either case, they have rejected the notion that open versus proprietary is an either/or proposition. Instead, they have embraced the fact that both approaches add value, and that by utilizing both intelligently we can achieve a greater measure of agility.
Remember, change (IT change, anyway) always has an economic component. One reason change gets delayed often has more to do with budgetary issues than with stability concerns.
So cheer up! Change will always be with us, but the tools that help us adapt more easily are either here already or are on the way.
Charles Garry is an independent industry analyst based in Simsbury, Conn. He is a former vice president with META Group's Technology Research Services. He can be reached at [email protected]