In the coming year, I doubt that any information technology topic will be more hyped than grid or utility computing. When I talk about grids, I'm talking about a technical model; when I talk about utility computing, I'm talking about the way that IT is managed and priced. These overlap, though, and both are great ways to get people excited again about buying more IT capacity.
I'm not using "hype" as a term of contempt. Even good things can be hyped, and hype is almost independent of actual technical merit. Some things are less exciting than they seem and get hyped out of proportion to their significance; some things are more important than they are easy to understand, and they deserve more hype than they get.
Some things, moreover—and pay attention, this is the hard part—are at least as exciting as they sound, but not for the reasons that generate most of the buzz. It takes a real effort to ignore, or even refute, all the noise while staying focused on the real opportunity and the challenges that come with it.
So it is with grid and utility computing: By the time you finish telling people why these things are more difficult than they appear, you may have no energy left for the rest of the story—that is, the reasons why people ought to be excited despite the difficulties you've just explained.
John Easton, a senior software engineer with IBM Global Services, has described (in a three-part essay last month on IBM's DeveloperWorks site at www.ibm.com/developerworks/grid) the three-cornered arena in which a grid computing solution must compete.
1) It must deal with heterogeneity: The economic benefits of a grid are greatly reduced if only certain types of compute node can take part.
2) It must be secure: A highly dispersed grid may be transferring not only valuable data but also invaluable intellectual property in the form of mobile executable code across a networks links.
3) It must be reliable: That term may have a highly task-specific definition. Some situations may permit continuing task retries until success, while others may demand a guaranteed maximum response time.
Grid reliability is critical and challenging. Even the guarantee of task completion implies monitoring capabilities that add to communication overhead. Guaranteed response time requires a more sophisticated strategy of dynamic, possibly speculative resource allocation and real-time event notification.
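The two task-specific notions of reliability described above—retrying until success versus guaranteeing a maximum response time—can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual grid middleware; the task function, retry limits, and deadline values are all hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError


def run_until_success(task, max_retries=10):
    """Best-effort reliability: keep retrying the task until it succeeds.

    Acceptable when eventual completion is all that matters; note that
    even this simple policy implies monitoring (we must observe each
    failure), which is part of the communication overhead described above.
    """
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            return task(), attempt
        except RuntimeError as exc:  # a failed grid node, hypothetically
            last_error = exc
    raise RuntimeError("gave up after %d retries: %s" % (max_retries, last_error))


def run_with_deadline(task, deadline_seconds):
    """Guaranteed response time: abandon the task if the deadline passes.

    A real grid scheduler would also need speculative resource
    allocation (e.g., dispatching duplicates to faster nodes) and
    real-time event notification; this sketch shows only the deadline.
    """
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(task)
        try:
            return future.result(timeout=deadline_seconds)
        except TimeoutError:
            future.cancel()
            raise RuntimeError("deadline of %.1fs exceeded" % deadline_seconds)
```

The point of the contrast: the retry policy needs only to detect failure after the fact, while the deadline policy must actively bound the wait, which is why guaranteed response time demands the more sophisticated allocation strategy.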
We begin the year of grid/utility hype, though, with a mind-share grab by Dell, EMC, Intel and Oracle, under the label of Project MegaGrid. Perhaps you'll join me in seeing a red flag in the Project MegaGrid FAQ document (www.oracle.com/technologies/grid/megagrid.html) when it says, "Adopting grid technologies can be done today with fast return on investment by taking these three steps:
1) Standardize on cost-effective servers and storage utilizing Intel processors.
2) Consolidate your databases, applications, servers, and storage.
3) Automate database, server, and storage management.”
The last of those three points, the need for automated management capability, is well-taken. In the same way that a true relational database needs to maintain metadata in relational form for self-monitoring and repair, a grid needs the management hooks that give its resources the greatest possible leverage.
But these vendors ought to know better than to wrap themselves in the flag of grid computing while prompting users to build a homogeneous grid of Intel CPUs running Oracle databases. Grid systems need to be envisioned, built and managed to use the variety of IT resources and data assets on hand.
Impressive returns can come from grids, as long as they provide pervasive security and task-sensitive attention to reliability. As I said, some things are as important as the hype suggests.
But Project MegaGrid looks a little too much like a self-serving campaign by vendors with their own ax to grind, and not enough like a candid confrontation of the things that grids must do if utility computing is going to get real.
Technology Editor Peter Coffee can be reached at [email protected].