New technology and commodity hardware widen grids' reach, but perception barriers remain.
Grid computing is an overnight success that has been almost four decades in the making.
Last month's announcement of the WS-Resource framework, which enables grid resource management with standard Web services protocols, completes a convergence that began with the 1965 introduction of the first multiprocessor computer. Libraries full of bleeding-edge research have since paved grids' way, developing parallel processing schemes to solve exotic and high-value problems.
Today's confluence of commodity components, burgeoning bandwidth and open-source systems software fills in the rest of the picture. Taken together, they make the enterprise case for grid computing: the connection of heterogeneous computing nodes using self-administering software that makes the nodes function as a single virtual system.
The last few years of eWEEK Labs' reviews have tracked key component technologies on a steady course toward their present state of prime-time readiness. Grids have become a compellingly cost-effective means of delivering almost any conceivable combination of massively parallel computing capability, incremental application scalability and enterprise-class fault tolerance.
But grids now face a barrier of perception that is perhaps even more challenging than previous barriers of technology, with many mainstream enterprise professionals doubting grids' applicability to their everyday tasks.
"Unless you have specialized needs like special-effects computing, heavy-duty financial market analysis, weather prediction or any other previously supercomputer-intensive field, I can't imagine that you would need or want the complexity of grid computing," said Jorge Abellas-Martin, an eWEEK Corporate Partner and CIO of the advertising agency Arnold Worldwide.
Plummeting prices of multiprocessor servers make Abellas-Martin and others look askance at the cost of connecting and coordinating separate machines. They wonder if that cost exceeds the savings they can realistically hope to achieve by improving their processor-utilization ratios through a grid's adaptive allocation of workload.
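The utilization argument rests on a simple scheduling idea: rather than pinning work to whichever machine a business unit happens to own, jobs flow to whatever node currently has spare capacity. The toy sketch below (not any vendor's actual scheduler; the job list and costs are invented for illustration) shows a greedy least-loaded dispatch that captures the principle:

```python
# Toy illustration of adaptive workload allocation: each incoming job
# is dispatched to the node carrying the least load so far, which is
# how a grid raises aggregate processor utilization.
import heapq

def allocate(jobs, node_count):
    """Greedy least-loaded dispatch.

    jobs: list of (job_id, estimated_cost) tuples.
    Returns a dict mapping each job_id to the node it was placed on.
    """
    # Min-heap of (current load, node id) so the least-loaded node
    # is always at the top.
    heap = [(0.0, n) for n in range(node_count)]
    heapq.heapify(heap)
    placement = {}
    for job_id, cost in jobs:
        load, node = heapq.heappop(heap)   # least-loaded node
        placement[job_id] = node
        heapq.heappush(heap, (load + cost, node))
    return placement

# Hypothetical mixed workload spread across two nodes.
jobs = [("render", 4.0), ("risk-calc", 2.0), ("etl", 1.0), ("sim", 3.0)]
print(allocate(jobs, 2))
```

Real grid middleware layers on resource discovery, failure handling and policy, but the core economics the skeptics are weighing come down to whether this kind of balancing beats simply buying a bigger box.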
However, what grids offer these rightly skeptical enterprise users is the ease of letting compute power flow wherever it's needed, rather than being statically allocated by the capital spending of particular business units. The enterprise data center is well on its way to becoming a supplier of service rather than a custodian of hardware, as eWEEK Labs will explore next week in Part 2 of this special report. Grids are a key enabling technology, and their foundations are the subject of this week's stories.
Abellas-Martin and others correctly note that certain types of problems have become closely associated with massively parallel machines. These problems are "embarrassingly parallel," in the words of Shahin Khan, Sun Microsystems Inc.'s vice president of high-performance and technical computing, because they're so obviously parallel in nature that "it's embarrassing if you can't figure out how to do it."
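What makes a workload "embarrassingly parallel" is that its units of work share no state, so they can be farmed out to any set of workers with no coordination beyond collecting the answers. A minimal sketch, using Python's standard multiprocessing pool and an invented stand-in task (the real workloads named in this article, such as protein folding, are far more involved):

```python
# Minimal sketch of an embarrassingly parallel workload: every task
# is independent, so a pool of workers (standing in for grid nodes)
# can run them concurrently with no inter-task communication.
from multiprocessing import Pool

def simulate(seed):
    """Stand-in for one independent unit of work, e.g. a single
    Monte Carlo trial. Deterministic so results are reproducible."""
    x = seed
    for _ in range(1000):
        x = (x * 1103515245 + 12345) % (2 ** 31)  # simple LCG churn
    return x % 100

if __name__ == "__main__":
    with Pool(4) as pool:  # four local workers stand in for nodes
        results = pool.map(simulate, range(8))
    print(results)
```

Because no task depends on another, adding nodes scales throughput almost linearly, which is exactly why such problems found homes on supercomputers first and on grids now.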
Life sciences problems such as protein folding are the current frontier of compute-intensive efforts, consuming multiprocessing power in enormous quantities with the aid of largely self-deploying multiprocessing support packages such as the San Diego Supercomputer Center's Linux-based Rocks.
eWEEK Labs met late last year with the team that assembled a world-class Rocks-based supercomputer in a matter of hours on a conference exhibit floor. Far from requiring a wave of underpaid graduate students working over a period of months, a computer grid, in eWEEK Labs' observation, can now be deployed on enterprise time scales with affordable human resources.
Peter Coffee is Director of Platform Research at salesforce.com, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA and Chapman College.