Grids Can Be Great

By Peter Coffee  |  Posted 2005-01-03

Opinion: If vendors make systems secure, reliable and capable of dealing with heterogeneity, grid computing might live up to the hype.

In the coming year, I doubt that any information technology topic will be more hyped than grid or utility computing. When I talk about grids, I'm talking about a technical model; when I talk about utility computing, I'm talking about the way that IT is managed and priced. These overlap, though, and both are great ways to get people excited again about buying more IT capacity.

I'm not using "hype" as a term of contempt. Even good things can be hyped, and hype is almost independent of actual technical merit. Some things are less exciting than they seem and get hyped out of proportion to their significance; some things are more important than they are easy to understand, and they deserve more hype than they get.

Some things, moreover—and pay attention, this is the hard part—are at least as exciting as they sound, but not for the reasons that generate most of the buzz. It takes a real effort to ignore, or even refute, all the noise while staying focused on the real opportunity and the challenges that come with it.

So it is with grid and utility computing: By the time you finish telling people why these things are more difficult than they appear, you may have no energy left for the rest of the story—that is, the reasons why people ought to be excited despite the difficulties you've just explained.

John Easton, a senior software engineer with IBM Global Services, has described (in a three-part essay last month on IBM's DeveloperWorks site) the three-cornered arena in which a grid computing solution must compete.

1) It must deal with heterogeneity: The economic benefits of a grid are greatly reduced if only certain types of compute node can take part.

2) It must be secure: A highly dispersed grid may be transferring not only valuable data but also invaluable intellectual property in the form of mobile executable code across a networks links.

3) It must be reliable: That term may have a highly task-specific definition. Some situations may permit continuing task retries until success, while others may demand a guaranteed maximum response time.
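To make the second point concrete: before a node executes code that arrived over a network link, it can at least verify that the payload came from a trusted dispatcher. Here is a minimal Python sketch of that idea, assuming a pre-shared secret between dispatcher and node; the key, payload and function names are illustrative, not drawn from any grid product.

```python
import hashlib
import hmac

# Assumption: every node receives this secret out of band during provisioning.
SHARED_KEY = b"example-provisioned-secret"

def sign_payload(code: bytes) -> bytes:
    """Dispatcher side: tag mobile code before sending it across the grid."""
    return hmac.new(SHARED_KEY, code, hashlib.sha256).digest()

def verify_payload(code: bytes, tag: bytes) -> bool:
    """Node side: refuse to run code whose tag does not check out."""
    return hmac.compare_digest(sign_payload(code), tag)

payload = b"def task(x): return x * x"   # stand-in for mobile executable code
tag = sign_payload(payload)
print(verify_payload(payload, tag))                  # True
print(verify_payload(payload + b"#tampered", tag))   # False
```

A shared-secret MAC only authenticates origin among key holders; a grid spanning many administrative domains would need public-key signatures and transport encryption on top of this.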

Grid reliability is critical and challenging. Even the guarantee of task completion implies monitoring capabilities that add to communication overhead. Guaranteed response time requires a more sophisticated strategy of dynamic, possibly speculative resource allocation and real-time event notification.
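The two reliability regimes can be sketched as scheduling policies. This is a hypothetical Python illustration of my own (threads standing in for grid nodes), not code from IBM or anyone else:

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def retry_until_success(task, arg, max_attempts=10):
    """Policy 1: keep retrying until the task succeeds.
    Acceptable when the situation permits unbounded completion time."""
    for _ in range(max_attempts):
        try:
            return task(arg)
        except RuntimeError:   # stand-in for a transient node failure
            continue
    raise RuntimeError("gave up after %d attempts" % max_attempts)

def speculative_run(task, arg, replicas=3, deadline=1.0):
    """Policy 2: launch redundant copies on several 'nodes' and take the
    first good result, bounding response time at the cost of extra work."""
    with ThreadPoolExecutor(max_workers=replicas) as pool:
        futures = [pool.submit(task, arg) for _ in range(replicas)]
        done, _ = wait(futures, timeout=deadline, return_when=FIRST_COMPLETED)
        for f in done:
            if f.exception() is None:
                return f.result()
        raise TimeoutError("no replica finished within the deadline")
```

The speculative version is exactly the trade described above: it buys a latency bound by burning redundant capacity, and it still needs the monitoring machinery that the simple retry loop leaves implicit.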

We begin the year of grid/utility hype, though, with a mind-share grab by Dell, EMC, Intel and Oracle, under the label of Project MegaGrid. Perhaps you'll join me in seeing a red flag when the Project MegaGrid FAQ document says, "Adopting grid technologies can be done today with fast return on investment by taking these three steps:

1) Standardize on cost effective servers and storage utilizing Intel processors.

2) Consolidate your databases, applications, servers, and storage.

3) Automate database, server, and storage management."

The last of those three points, the need for automated management capability, is well-taken. In the same way that a true relational database needs to maintain metadata in relational form for self-monitoring and repair, a grid needs the management hooks that give its resources the greatest possible leverage.
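What those management hooks might look like in miniature: a registry in which every grid resource reports heartbeats, so the manager can tell live nodes from silent ones without someone polling each by hand. This is a hypothetical Python sketch; the class and field names are mine, not from any vendor's product.

```python
import time

class NodeRegistry:
    """Tracks self-reported heartbeat metadata for grid resources."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout   # seconds of silence before a node is presumed down
        self._last_seen = {}     # node id -> timestamp of last heartbeat

    def heartbeat(self, node_id, now=None):
        """Called by (or on behalf of) a node to report that it is alive."""
        self._last_seen[node_id] = time.monotonic() if now is None else now

    def live_nodes(self, now=None):
        """Nodes heard from within the timeout window."""
        now = time.monotonic() if now is None else now
        return sorted(n for n, t in self._last_seen.items()
                      if now - t <= self.timeout)

registry = NodeRegistry(timeout=5.0)
registry.heartbeat("node-a", now=100.0)
registry.heartbeat("node-b", now=103.0)
print(registry.live_nodes(now=106.0))  # ['node-b'] -- node-a has gone quiet too long
```

Even this toy version shows the cost side of self-monitoring: every heartbeat is network traffic, the same communication overhead that guaranteed task completion implies.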

But these vendors ought to know better than to wrap themselves in the flag of grid computing while prompting users to build a homogeneous grid of Intel CPUs running Oracle databases. Grid systems need to be envisioned, built and managed to use the variety of IT resources and data assets on hand.

Impressive returns can come from grids, as long as they provide pervasive security and task-sensitive attention to reliability. As I said, some things are as important as the hype suggests.

But Project MegaGrid looks a little too much like a self-serving campaign by vendors with their own ax to grind, and not enough like a candid confrontation of the things that grids must do if utility computing is going to get real.

Technology Editor Peter Coffee can be reached at

To read more from Peter Coffee, subscribe to eWEEK magazine. Check out eWEEK.com for the latest utility computing news, reviews and analysis.
Peter Coffee is Director of Platform Research at, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues.

Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA, and Chapman College.
