Computers, which mimic human behavior a little more with each step in their evolution, are now connecting for brainstorming sessions. The resulting pooling of processing power could lift enterprise computing to new levels of performance in a few years.
The collaborative, network-based model known as grid computing enables the sharing of data and computing cycles among tens of thousands of processors to create the ultimate brainstorming tool: a virtual supercomputer. Grid architectures have been developing for years in academic and research settings, but until now, there has been little demand for supercomputer levels of power at most corporations. The world's largest hardware and software companies are betting that is about to change.
Grid architecture, hailed in many circles as the next great evolutionary step in computer technology, is a simple concept that becomes very complex in its implementation. Using IP-based networks, a grid links thousands of servers and desktop computers into a mighty computing engine capable of delivering vast amounts of computational power.
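The core idea is simple to sketch: split a large job into independent pieces, farm them out to whatever processors are available, and gather the partial results. The following is an illustrative stand-in, not any vendor's actual grid software; the `simulate` task and worker count are invented for the example, and local processes stand in for networked machines.

```python
from multiprocessing import Pool

def simulate(params: int) -> int:
    # Stand-in for a compute-heavy task (e.g., one design-analysis trial).
    return sum(i * i for i in range(params))

def run_grid_job(work_items, workers=4):
    # A scheduler splits the job into independent pieces, farms them out,
    # and gathers the partial results -- the same scatter/gather pattern
    # a grid applies across thousands of machines instead of local cores.
    with Pool(processes=workers) as pool:
        return pool.map(simulate, work_items)

if __name__ == "__main__":
    results = run_grid_job([10_000, 20_000, 30_000])
    print(len(results))
```

The pattern scales because the pieces share no state: adding machines adds throughput, which is exactly what makes the pooled-cycles model attractive.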
Born in research labs as a lower-cost alternative to supercomputers, grid computing is now ready for prime time, proponents say. It has already taken a few tentative steps into the enterprise, and a decade or so down the road, they expect the grid model to hit the public Internet, creating a popular, global computing resource of enormous power.
For corporate systems, grid computing represents enormous increases in efficiencies as the processing cycles of millions of computers that now sit nearly idle many hours a day—often consuming power just to run a screen saver—are harnessed for productive tasks, ranging from pharmaceutical research to geophysical exploration.
In fact, the much-anticipated efficiencies of grid architecture have sparked predictions of free PCs being handed out by governments and large corporations in return for exclusive rights to the machines' excess processing capacity. The receiver's only cost would be the energy to run the PC. Already, a San Diego company, Entropia Inc., is building a distributed computing revenue model by paying ordinary Web users for surplus computer processing cycles, then brokering the cycles to companies and research institutions.
However, amid the buzz about grid computing, it can be hard to differentiate real potential from marketing hype. If you were to gauge the prospects for grid computing simply by the sheer number of press releases and announcements coming out of Compaq Computer Corp., IBM, Platform Computing Inc. and Sun Microsystems Inc. over the past three months, you'd no doubt conclude the technology has arrived. Scenarios painted by these industry leaders have enterprises of all sizes, whether housed in a single facility or scattered around the globe, quickly and easily harnessing the excess processing cycles of every desktop and server they own to achieve supercomputer power at Web server prices.
While there's just enough truth to the PR to whet the appetite of resource-starved IT managers, a reality check is in order. Compaq, IBM, Platform and Sun, building on the open-source Globus Project, are indeed making great headway developing grid computing. And no one denies that grid computing's potential power/price ratio justifies continued substantial investments in its development.
But claims by these companies may well be creating unrealistic expectations. In fact, several researchers and analysts warn there are serious limits to what this architecture can accomplish in a business environment, and from an IT standpoint, extremely serious—perhaps even terminal—obstacles may lie ahead.
The leading vendors in grid computing today are Platform Computing, based in Markham, Ontario, which has been developing distributed computing software since 1992, and IBM, in Armonk, N.Y., which since August has landed hundreds of millions of dollars in contracts to build grid infrastructures for universities and governments.
In August, IBM announced it had been selected by the British government as a prime contractor for its National Grid, a massive network of computers throughout the United Kingdom that will share high-energy physics data generated at the Center for Accelerator and Particle Physics at the Illinois Institute of Technology, in Chicago, and at CERN, in Geneva. That announcement came on the heels of news that the Netherlands had tapped IBM to build a grid that will aggregate the computing resources of five universities.
In November, IBM announced a collaboration with the University of Pennsylvania to build a grid network for the study of mammography. And last month, IBM and Platform collaborated with the French Myopathy Association to create a grid that is enabling tens of thousands of Internet users to contribute unused processing cycles of their home computers to help muscular dystrophy researchers map more than 500,000 proteins.
Granted, those initiatives are all rooted in research. But Ian Baird, Platform's chief business architect and corporate grid strategist, said that the enterprise, too, is ripe for grid computing and that the technology is ready.
“There are a variety of uses for grid computing in the enterprise,” Baird said. “They relate to companies … that require intensive computational design—the aircraft, semiconductor and film industries, for example. Our technology is being used in all those areas today, plus for life sciences and in financial markets for risk analysis and assessment.”
In part, Baird's optimism about grid computing in the enterprise is rooted in the success Platform has achieved in corporate markets. He said the company's commercial software for distributed computing—known as LSF and LSF ActiveCluster—is running in about 1,500 of the Fortune 2000 companies. Beyond that, Baird foresees a market as vast as the Internet itself. “Demand for computational power and storage is infinite,” he said. “As you build virtual supercomputers, you're limited only by the imagination of the engineer. You'll continue to use up whatever supply you create.”
Likewise, IBM smells enormous potential in grid computing for the enterprise and is betting heavily on it. The company has committed to developing grid technologies for both its xSeries Intel Corp.-based Linux servers and its pSeries server line, based on its own Power4 processors.
“We expect the technology to find its way rapidly into more conventional commercial development,” said David Turek, IBM's vice president of emerging technology, when announcing the National Grid contract in August.
The question is when.
Mike Nelson, IBM's director of Internet technology and strategy, said it will probably be a couple of years before the company's grid products are ready for general enterprise implementation.
“We've got some more work to do to make Globus versatile enough for widespread implementation in the enterprise,” Nelson said. Still, he added, “many of the research projects under way now are almost identical to the needs of large enterprises, especially for companies doing drug design, geophysical prospecting and mechanical engineering.”
What's more, Nelson said, the appearance of grid architectures will be the result of an evolution already under way. For example, he said, “the whole point of Britain's grid is to open it up to business—first, a few researchers from companies working with the academics, but eventually, it will be opened up to more and more people within the commercial sector.”
In terms of competition, the most ambitious initiative IBM will face is Sun's Grid Engine Enterprise Edition 5.3, a proprietary technology, now in beta release, for creating campuswide grids. Grid Engine has already been embraced by some key design and engineering companies, including Sony Corp. and Synopsys Inc., a leading designer of complex integrated circuits, to shorten design cycles and time to market.
Compaq, meanwhile, has announced it is entering the grid market in a partnership with Platform. Compaq will install and support Platform's grid technology on its AlphaServer Unix systems and Linux-based versions of its ProLiant servers.
What all these initiatives share is an open-source foundation known as Globus Toolkit, a set of software tools for building grids and grid-based applications. Now in Release 2.0, its tools and libraries for security, management of data and resources, communications, fault detection, and so on are all under development by the Globus Project, an international collaboration of researchers and programmers based at the Argonne National Laboratory's Mathematics and Computer Science Division, in Argonne, Ill.
IBM, which in 2000 committed $1 billion to commercializing Linux for its server platforms, similarly jumped into the Globus effort with both feet last summer, announcing that it would contribute undisclosed quantities of both funding and researchers to the tool kits development.
And Platform, which has built its business on proprietary commercial software for distributed computing, is now developing a strategy “to Red Hat” Globus, that is, to mimic Red Hat Inc.'s business model for Linux by distributing Globus Toolkit with what Platform's Baird described as “a grid suite” of programs, plus commercial support in the form of “platform integration, configuration, installation, training, problem management, maintenance, documentation and support.”
The players see plenty of opportunity to build lucrative proprietary products on top of the Globus open-source framework. Platform insisted it will maintain vendor and operating system neutrality, while Sun has chosen a closed, Solaris-based path. Yet both companies have announced partnerships with Avaki Corp., a Cambridge, Mass., commercial developer of grid software that manages processing power and data sharing across heterogeneous networks.
IBM clearly intends to build on the profitable proprietary open-source fusion it has achieved with Linux. Microsoft Corp. and Sun—both of which have announced they will build technologies on Globus Toolkit—seem to have much the same idea.
Globus and a consortium called the New Productivity Initiative are establishing “standards for protocols and APIs that will lead to plug-and-play modules for grid computing,” Baird said.
So whats not to like about grid computing? The most often cited drawbacks are bandwidth and security. Proponents tend to downplay the bandwidth issue.
“What you have typically is easily parsed jobs, packets so tiny that bandwidth becomes a nonissue,” Baird said. “Granted, not every job can be broken down that way, but a significant number can.” And at any rate, he and other proponents argue, bandwidth availability is growing at a relatively fast pace.
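Baird's point, that many jobs decompose into self-contained packets far smaller than the computation they trigger, can be illustrated with a hypothetical parameter sweep. The packet format and sizes here are invented for the sketch, not taken from any vendor's protocol.

```python
import json

def split_into_packets(start: int, stop: int, step: int):
    # An "easily parsed" job: a parameter sweep decomposes into
    # self-contained packets. Each packet here is a few dozen bytes
    # of JSON describing a sub-range of the sweep, so the network
    # carries almost nothing compared with the compute it triggers.
    return [json.dumps({"lo": lo, "hi": min(lo + step, stop)})
            for lo in range(start, stop, step)]

packets = split_into_packets(0, 1_000_000, 250_000)
# Each worker receives one tiny packet and does the heavy
# computation locally, returning an equally small result.
print(len(packets), max(len(p) for p in packets))
```

When the packet-to-compute ratio is this lopsided, bandwidth stops being the bottleneck, which is the crux of the proponents' argument.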
The far larger issue, they said, is security, which Baird calls “a major challenge—and one that the Global Grid Forum [an international organization of distributed computing researchers], NPI and all of us are looking at very hard.”
But proponents said that is exactly why the enterprise is the next logical focus of grid computing.
“[General Motors Corp.] may have data centers and large computing capacity spread across the United States and Europe,” Baird said, “but it's contained inside the corporate firewall. In that case, grid security issues are just an extension of existing corporate security issues.”
IBM's Nelson said that whatever the hurdles, the open-source effort ensures that grid computing will eventually thrive in the enterprise. “Globus is the key to the whole thing,” he said. “Open source is the only way you're going to get everybody to buy into the effort because it doesn't favor one vendor over another.”
What's more, Nelson said, it's going to be painless. “With good security and accounting mechanisms,” he said, “companies are going to be billed for the cycles they use without even knowing where on the grid they come from. In a few years, computing cycles will be delivered like electricity is today. You pay the electric bill every month without ever wondering which generating plant the power came from.”
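The metered, utility-style billing Nelson describes could look like the following accounting sketch. This is a hypothetical illustration: `CycleMeter`, the rate, and the client and node names are invented for the example, not part of any IBM or Globus interface.

```python
from collections import defaultdict

class CycleMeter:
    # Hypothetical sketch of grid accounting: the grid logs CPU-seconds
    # per client, regardless of which node supplied them, then bills
    # monthly -- the "electric bill" model Nelson describes.
    def __init__(self, rate_per_cpu_second: float):
        self.rate = rate_per_cpu_second
        self.usage = defaultdict(float)

    def record(self, client: str, node: str, cpu_seconds: float) -> None:
        # The supplying node could be logged for auditing, but it never
        # surfaces on the client's bill -- clients pay for cycles, not
        # for any particular machine.
        self.usage[client] += cpu_seconds

    def monthly_bill(self, client: str) -> float:
        return round(self.usage[client] * self.rate, 2)
```

A client billed this way sees only a total, just as an electricity customer sees kilowatt-hours rather than generating plants.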