Internet Insight: The Paradox of Grid Computing

 
 
By Peter Coffee  |  Posted 2002-01-07

Experienced aviators warn novice pilots: "The problem with a multiengine airplane is that sometimes you need them all." During takeoff, for example, failure of even a single engine is a high-risk situation—but with more engines, there is a greater chance of at least one failure.

Distributed computing systems, such as grid computing, involve a similar paradox: The more resources the system has, the greater the number of points where the system can fail or degrade—and the harder the task of ensuring adequate performance in all situations, without unacceptable overhead.

A computing grid faces four new tasks, in addition to whatever problems it was built to solve. The grid must discover and allocate available resources as they come and go; it must protect long-distance interactions against intrusion, interception and disruption; it must monitor network status; and it must initiate and manage communications among the processing nodes to make each node's needs known to the others. There is no single optimal approach to any of these tasks, but rather a family of possible solutions that match up with different types of problems.

Delays in communication between widely separated nodes fall into two groups. Fundamental is the speed-of-light limit: A node at one location cannot possibly become aware of a change in state at another location in less than the straight-line, speed-of-light propagation time of almost exactly 1 nanosecond per foot of separation.

That sounds good until it's compared, for example, with modern local memory access times of, at most, a few tens of nanoseconds. Tightly coupled applications, such as simulation or process control, are therefore disadvantaged in distributed environments "until science discovers a method of communication that is not limited by the speed of light," as Aerospace Corp. scientists Craig Lee and James Stepanek wrote in their paper published in April 2001 (which can be accessed via www.eweek.com/links).
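
To put that floor in concrete terms, here is a minimal calculation of the speed-of-light lower bound on one-way latency at a few illustrative distances (the distances are chosen for this sketch, not taken from the paper):

```python
# Light travels roughly 0.98 feet per nanosecond in vacuum -- the
# "almost exactly 1 nanosecond per foot" rule of thumb.
C_FEET_PER_NS = 0.98

def min_latency_ns(distance_feet):
    """Absolute lower bound on one-way signal delay, ignoring all
    switching, queuing and protocol overhead."""
    return distance_feet / C_FEET_PER_NS

for label, feet in [("same rack (10 ft)", 10),
                    ("across campus (1 mile)", 5280),
                    ("coast to coast (2,500 miles)", 2500 * 5280)]:
    print(f"{label}: at least {min_latency_ns(feet):,.0f} ns")
```

Even the campus-scale case lands in the microseconds, hundreds of times slower than a local memory access; the cross-country case is in the milliseconds before a single bit of real networking overhead is counted.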

There are problem decomposition techniques that aren't as badly handicapped by the speed of light: for example, Monte Carlo simulation, or the kind of data parceling strategies made famous by the SETI@Home project, which distributes sets of radio telescope data for intelligent-life detection by screen saver software. When problems lend themselves to this approach, they often don't need frequent synchronization and therefore aren't severely hampered by distance.
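
A Monte Carlo workload shows why such problems parcel out so cleanly. In this sketch (the pi-estimation task is a stand-in example, not one from the article), each chunk is fully independent: on a real grid each call would run on a different node, with communication needed only once, to combine the counts at the end:

```python
import random

def monte_carlo_chunk(n_samples, seed):
    """One independent work unit: count random points that fall inside
    the unit quarter-circle. Needs no contact with other chunks."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

# On a grid, each chunk would be shipped to a separate node; here we
# simply run the ten work units in sequence and merge at the end.
chunks = [monte_carlo_chunk(100_000, seed) for seed in range(10)]
pi_estimate = 4.0 * sum(chunks) / (10 * 100_000)
print(pi_estimate)
```

The single reduction step at the end is the only synchronization point, which is why distance barely matters for this class of problem.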

What does affect the latter class of problem, though, is the limited bandwidth of networks and network interfaces. Plotting recent progress in the paper cited earlier, Lee and Stepanek find network access bandwidth, as determined by available interface cards, doubling every 2.5 years, ominously lagging the 1.5-year doubling time of processor performance, assuming continued Moore's Law improvement, which many project as likely through 2010.
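
The compounding arithmetic behind that concern is easy to check. Using only the two doubling times cited above, a short projection shows the processor-to-interface gap widening:

```python
# Project growth from the doubling times Lee and Stepanek cite:
# 1.5 years for processor performance, 2.5 years for NIC bandwidth.
def growth(years, doubling_time):
    """Multiplicative improvement after the given number of years."""
    return 2 ** (years / doubling_time)

for years in (5, 10):
    cpu = growth(years, 1.5)
    net = growth(years, 2.5)
    print(f"after {years} years: CPU x{cpu:.0f}, "
          f"NIC x{net:.0f}, gap x{cpu / net:.1f}")
```

After a decade, processors improve roughly a hundredfold while interface cards improve only sixteenfold, leaving each processor more than six times as starved for bandwidth as when it started.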

With processor speed outpacing the ability of interface cards to send and receive to the grid, it follows that some processing power will be best employed in boosting information content per bit: for example, by continuing the refinement of data compression algorithms using techniques such as the wavelet transforms in the JPEG 2000 standard.
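
The trade is simple to demonstrate with a general-purpose compressor (zlib here stands in for the wavelet-based image coding of JPEG 2000, which is a far more specialized technique): higher compression levels spend more processor cycles to put fewer bytes on the wire. The payload is synthetic, repetitive text chosen for illustration:

```python
import zlib

# Synthetic, repetitive payload standing in for real application data.
payload = b"sensor_reading,timestamp,value\n" * 2000

# Higher zlib levels trade CPU time for fewer transmitted bytes.
for level in (1, 6, 9):
    compressed = zlib.compress(payload, level)
    print(f"level {level}: {len(payload)} -> {len(compressed)} bytes")
```

For a bandwidth-starved grid node, burning spare cycles on the highest level is usually the right side of this trade.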

Data compression developments such as these are offset, however—perhaps to devastating effect—by the growth of data overhead entailed in the use of XML syntax to make data more self-disclosing than it is in application-specific binary data structures. There's a difficult trade-off to be made between ad hoc availability of data for unanticipated uses and efficient, cost-effective packaging of data.
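
The markup overhead is easy to measure. This sketch packs a made-up three-field record (the field names and values are invented for illustration) as an application-specific binary structure and as self-describing XML:

```python
import struct

record = (42, 19.5, 7)  # hypothetical id, price, quantity fields

# Application-specific binary layout: int32 + float64 + int32.
binary = struct.pack("<idi", *record)  # 4 + 8 + 4 = 16 bytes

# The same three values as self-describing XML markup.
xml = (b"<order><id>42</id><price>19.5</price>"
       b"<quantity>7</quantity></order>")

print(len(binary), len(xml))  # the XML form is several times larger
```

The XML version carries its own field names, which is exactly what makes it usable by applications that were never told the binary layout; the cost is paying for those names in every single record sent.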

Sad to say, a great deal of processing power may also be consumed by the calculations needed to implement data integrity and security measures, such as encryption for authentication of messages sent and received. Grid computing, in an open environment such as an IP network, invites both attempts to read the mail between the nodes and to analyze the patterns of traffic for what they might reveal about concentrations of valuable information.

If the network and the computer are the same, it follows that the network—an inherently exposed asset—is increasingly the locus of IT value. Enterprise IT architects and service providers will have to learn to protect it without crippling its hoped-for performance gains.

 
 
 
 
Peter Coffee is Director of Platform Research at salesforce.com, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA and Chapman College.
 
 
 
 
 
 
 
