When Paul Strong looks at today's typical enterprise data center, he often sees the beginnings of a grid computing environment. Businesses are bringing in hardware that offers scalability and resilience, and a large number of applications are taking advantage of it.
“The typical data center today is already a primordial grid,” said Strong, chairman of the Enterprise Grid Alliance's Technical Steering Committee and a systems architect at Sun Microsystems Inc., in Santa Clara, Calif.
The EGA—a consortium of about 30 vendors and enterprise grid users—is one of several industry bodies working on grid computing. Last month, the EGA released its first Reference Model for enterprise grids, a tool for understanding data center components and how they relate to one another.
That's a key issue as businesses start dipping their toes into the grid pool, Strong said. For grids to really take off in the enterprise, the view of data centers needs to shift: rather than viewing components in silos, determined in large part by the applications they run, enterprises need to take a holistic view of their IT infrastructures.
They also need to embrace grid-enabling technologies, he said. “A large part of grid is breaking out of those silos and moving from a world where applications were server-centric and management was component-centric,” Strong said.
Some businesses are starting to investigate grids, particularly in industries such as financial services and automotive that run compute-intensive applications. The automotive industry has been a key early adopter of grid computing, and it was one of five industries IBM, of Armonk, N.Y., targeted two years ago when it announced upcoming grid offerings.
General Motors Corp., of Detroit, employs a grid to ensure that its vehicles are safe and fuel-efficient by simulating aerodynamics, fluid dynamics, visualization and crashes. Grid computing enables GM to save the costs of physical crash tests, which the company said can range from $300,000 to $500,000 per test.
“You might save 100 or 200 vehicles in a program by doing simulation,” said Tom Tecco, global director of computer-aided engineering, computer-aided testing and controls at GM.
Tecco said GM's grid is based on IBM Unix systems. “What we're using is spare desktop capability in one case, and in the other case it's a concentrated server room that supplies the bulk of the horsepower for doing our calculations,” he said. GM runs mostly commercial software on its grid, which became operational in January 2001.
Grid computing has helped Freescale Semiconductor Inc. cope with an aging infrastructure increasingly burdened by more complex and expensive software.
The Austin, Texas, company wanted to maximize the hardware investments it was going to make. About three years ago, Freescale created four server farms around the world—in Austin, Australia, Israel and India—all of which can work as a single system. Freescale uses Toronto-based Platform Computing Inc.'s LSF software to manage the workloads on the systems, said Dan Griffith, manager of Freescale's comprehensive software asset management team.
“Before, engineers were running on a [limited number of] CPUs because that's what was available, and many times [they] needed two [software] licenses because it was taking twice as long to get it done,” Griffith said. “Now, every engineer has access to this hardware.”
The server farms have a variety of systems, either Sun servers running Solaris or boxes from a variety of other vendors running Linux, Griffith said. The Austin farm is the largest, with more than 1,500 systems. Platform's LSF software—which is based on the open Virtual Execution Machine architecture and lets users virtualize their infrastructure, from desktops to servers to mainframes—allows Freescale to manage and monitor the servers and applications running on them.
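The core idea behind software like LSF is simple to sketch: jobs go to whichever host in the shared pool is least loaded, regardless of which farm or operating system that host belongs to. The toy Python scheduler below illustrates only that one idea; it is a hypothetical sketch (the class, host names and job names are invented), not how LSF itself works.

```python
import heapq

class MiniScheduler:
    """Toy grid-style dispatcher: each submitted job goes to the
    least-loaded host in a shared pool. Real workload managers such
    as LSF add queues, priorities, licenses and much more."""

    def __init__(self, hosts):
        # Min-heap of (running_jobs, host_name); the smallest tuple
        # is always the least-loaded host (ties broken by name).
        self._heap = [(0, h) for h in sorted(hosts)]
        heapq.heapify(self._heap)

    def submit(self, job):
        # Pick the least-loaded host, bump its load, put it back.
        load, host = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (load + 1, host))
        return host  # where this job was dispatched

# Hypothetical pool spanning two "farms" treated as one system.
pool = MiniScheduler(["austin-01", "austin-02", "israel-01"])
placements = [pool.submit(f"sim-{i}") for i in range(6)]
print(placements)
```

Because every submission sees the whole pool, the six jobs spread evenly across the three hosts instead of queuing on whichever silo an engineer happened to own, which is the shift from server-centric to shared infrastructure described above.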