Believe it. Grid computing is on its way. But even though the case for grids is strong, it may be years before we're able to reap the benefits of grid processing.
Corporations are finally figuring out that the pervasive Pentium III- and 4-based systems in most organizations are overkill for doing word processing and spreadsheets. It's also clear that the big honking SMP machines are way too expensive for many of the types of services they provide. SMPs are useful for running SAP, Siebel, Oracle and Exchange, for example, but they're not suited for large, complex workloads that may take several days to run.
Then there are high-performance computing clusters. When the Top 500 (www.top500.org) list of fastest supercomputers in the world comes out at the end of the month, many of them will be made up of low-cost Intel boxes running
Linux, though there will be some Windows-based clusters as well. These systems are used mainly by academic institutions, oil corporations and pharmaceutical companies for scientific and technical applications. They are known as Beowulf clusters, after the 1994 Linux project by the Center of Excellence in Space Data and Information Sciences, a contractor to NASA, to create a supercomputer out of low-cost, off-the-shelf systems.
PC and operating system vendors would like nothing more than to push computing clusters into the enterprise because they sell more units, services and tool kits. But these clusters are fundamentally the wrong way to go for most companies.
First, they usually require the purchase of new PCs. Second, hyperthreading and other performance-enhancing technologies in the processors, one of the main reasons for buying new systems, must be turned off to run in a performance-cluster environment. Third, although there are vendor-driven efforts to encourage clustering in the enterprise, the best technology is driven primarily by academia and is not suited to general-purpose computing. And fourth, to run the performance clusters, organizations still need a big honking data center and the IT staff to go along with it.
Performance clusters are optimal for solving problems where large data sets need to be analyzed quickly. For example, Stanford is setting up a supercomputing cluster to handle calculations for protein-folding models and other scientific computing tasks. That system is made up of 300 Dell dual-processor, Xeon-based nodes running Linux. It will be about the 200th fastest-performing supercomputer in the world. Cornell University's Cornell Theory Center has done most of the work making clusters practical for businesses, but it is still a ways off from the mainstream.
What corporations need to do is suck the cycles out of the PCs they already own. That's where grid computing comes in. The technology behind grid computing and HPCCs is clearly merging. In fact, the Cornell Theory Center is proving that grids of HPCCs can be used to solve mainstream business problems.
The next step for grid computing, which is most commonly associated with the ultrapopular and utterly useless SETI At Home project, is the enterprise. Companies such as UD Networks, for example, have tool kits that can take application binaries and split up the workload without having to recompile them. Gateway made news late last year when it launched its Processing on Demand grid. Oracle's 9i with Real Application Clusters fits in well, though we are far from running distributed queries on a grid. To their credit, Dell and Microsoft see the evolution of HPCCs into grid computing and are actively trying to solve general-purpose business problems.
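The core idea those tool kits exploit is simple: carve one big job into independent chunks, farm the chunks out to whatever machines are idle, then gather the partial results. Here's a minimal Python sketch of that scatter-gather pattern, using local worker processes to stand in for idle desktop PCs; the function names are illustrative, not part of any vendor's kit.

```python
# Toy sketch of the grid idea: split a large job into independent chunks,
# hand them to available workers (local processes standing in for idle PCs),
# then combine the partial results. Real grid tool kits also handle node
# discovery, failures and scheduling; none of that is modeled here.
from multiprocessing import Pool

def analyze_chunk(chunk):
    # Stand-in for a compute-heavy task, e.g. one pass of a data-mining job.
    return sum(x * x for x in chunk)

def run_on_grid(data, chunk_size=10, workers=3):
    # Scatter: slice the data set into independent chunks.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # Each pool worker plays the role of one idle PC on the grid.
    with Pool(workers) as pool:
        partials = pool.map(analyze_chunk, chunks)
    # Gather: combine partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    # Same answer as running the whole job on one big machine.
    print(run_on_grid(list(range(100))))
```

The point of the pattern is that the application never needs a big SMP box; it only needs work that decomposes into pieces with no dependencies between them, which is exactly why business intelligence and data-mining workloads are good grid candidates.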
So why aren't we all merrily building grids? Most application vendors are sitting on their duffs. They don't care about grids or clusters because they can't figure out how to license their products on grids. Every business intelligence, data mining and analytical application, basically any program that can use some extra computing cycles, should be made grid-aware. The fact is, grids can be immensely powerful money-savers. We won't be using grids until the vendors get moving, but we can all let them know: It's time for grids.
As the director of eWEEK Labs, John manages a staff that tests and analyzes a wide range of corporate technology products. He has been instrumental in expanding eWEEK Labs' analyses into actual user environments, and has continually engineered the Labs for accurate portrayal of true enterprise infrastructures. John also writes eWEEK's 'Wide Angle' column, which challenges readers interested in enterprise products and strategies to reconsider old assumptions and think about existing IT problems in new ways. Prior to his tenure at eWEEK, which started in 1994, Taschek headed up the performance testing lab at PC/Computing magazine (now called Smart Business). Taschek got his start in IT in Washington D.C., holding various technical positions at the National Alliance of Business and the Department of Housing and Urban Development. There, he and his colleagues assisted the government office with integrating the Windows desktop operating system with HUD's legacy mainframe and mid-range servers.