Utility computing, the IT vendor pitch of choice for selling more hardware and services this year, is often misperceived—or actually misrepresented—as either a technology or a business arrangement. It's true that the utility model is enabled by new technologies and that it promotes the development of innovative service provider relationships. However, utility computing itself, whether marketed as On Demand by IBM or as N1 by Sun Microsystems Inc. or by any other name, is fundamentally a management approach that can and should change the enterprise view of what an IT investment is buying.
Last week, in Part 1 of this special report, eWEEK Labs examined grid computing as a key enabling technology for the utility computing model of IT use. The “utility” label is often misused as a synonym for grid computing, but the concepts share a common goal of making IT power as incrementally available as watts of electricity from the power grid.
Indeed, the commodity components, pervasive connections and standards-based system software that combine to make grids a cost-effective option also make it feasible to parcel out the power of those grids in utility fashion.
The heterogeneity and dynamic nature of grids are challenges, but meeting them lays a firm foundation for the 24-by-7 quality of service that utility computing requires. "The goal of the grid is to assume heterogeneity instead of trying to accommodate it," said Frank Martinez, chief technology officer of service fabric provider Blue Titan Software Inc., in San Francisco. "In a homogeneous cluster environment, it's more difficult to provide availability assurance."
At the same time, however, it's just as much in the nature of utility computing to carve up a single, high-capacity superserver—with proven fault tolerance, diagnosis and other high-availability features—into multiple virtual machines using technologies such as those from the VMware subsidiary of EMC Corp.
Ideally, a business unit should have no need to know which approach is being used to meet its requirements. Utility computing benefits from grids but is not limited to doing what grids do well.
Moreover, utility computing's benefits are not even confined to the compute-intensive aspects of the enterprise IT portfolio. The utility approach is already having an impact on storage, output and every other facet of what enterprises do with information.
Inside and Out
Many IT buyers believe that utility computing implies an outsource relationship with a computing service provider. However, the utility model can be equally well applied to owned IT assets.
"The most common model is that a business unit has a service it wants to deploy, it calculates the amount of infrastructure that it needs to achieve the level of service that it needs, and the IT department inherits that and winds up with a hodgepodge that later needs a consolidation project," said David Nelson-Gal, Sun's vice president of N1 and availability products, in Santa Clara, Calif. "Businesses need to evolve to the point that the IT department delivers service levels and charges back to the business units for capital based on service delivery."
Dave Roberts, Inkra Networks Corp.'s co-founder and vice president of strategy, in Fremont, Calif., boiled down the utility concept in similar terms: "You take virtualization [technology], add on-demand [manageability], add charging [business infrastructure], and that's utility," Roberts said.
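Roberts's three-part breakdown can be sketched in a few lines of code. The following is purely illustrative; the class and method names are invented for this sketch and do not come from any vendor's product or API. A shared pool stands in for virtualization, an allocation call for on-demand manageability, and metered billing for the charging layer.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of Roberts's formula: virtualization (a shared
# pool) + on-demand (allocate on request) + charging (metered billing).

@dataclass
class UtilityPool:
    capacity_cycles: int                    # virtualization: one shared pool of capacity
    rate_per_cycle: float                   # charging: the metered bulk price
    allocations: dict = field(default_factory=dict)

    def allocate(self, unit: str, cycles: int) -> None:
        """On-demand: grant capacity only if the shared pool can cover it."""
        used = sum(self.allocations.values())
        if used + cycles > self.capacity_cycles:
            raise RuntimeError("pool exhausted; grow the pool or queue the request")
        self.allocations[unit] = self.allocations.get(unit, 0) + cycles

    def charge_back(self, unit: str) -> float:
        """Charging: bill the business unit only for the cycles it holds."""
        return self.allocations.get(unit, 0) * self.rate_per_cycle

pool = UtilityPool(capacity_cycles=1_000, rate_per_cycle=0.02)
pool.allocate("marketing", 300)
pool.allocate("finance", 450)
print(pool.charge_back("finance"))   # 9.0
```

The point of the exercise is the charge-back step: once capacity is virtualized and metered, the business unit pays for service delivered, not for boxes installed, which is exactly the shift Nelson-Gal describes above.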
Enterprise buyers are driven by the economics, not the technology, of utility computing.
Utility approaches “will fuel the growth of scalable services,” said eWEEK Corporate Partner Michael Skaff, manager of IT for digital media network builder AdSpace Networks Inc., in Burlingame, Calif. “Resources that were previously locked up elsewhere can be redirected, and companies will be able to further specialize in their core competencies and purchase peripheral functionality as a service.”
Making IT Manageable
The virtualization tools of utility computing can prevent the corruption of production environments by preproduction or multiversioned code.
“We can run conflicting versions” of major applications, said Paul Little, configuration manager for the Fidelity Information Services division of Fidelity National Financial Inc., in San Diego, describing his use of application virtualization technology from Softricity Inc. “We were able to eliminate 13 servers that had been set up just to support different versions of our application.”
Fidelity is considering moving this technology into operations, enabling side-by-side provisioning of tailored versions of an application to multiple external users on a single, load-balanced platform.
But Inkra's Roberts is quick to emphasize his belief that the technical capability to share resources through virtualization models is only the start of creating compelling utility offerings. "In my mind, people should not look at it as getting fewer servers to drive down capital expenditure," Roberts said. "They should be looking at a more efficient model that lets them adapt and change far more easily than they do today."
As hardware costs decline, the costs of any hardware management solution must be tightly controlled to preserve overall cost savings. Roberts said this was the reason Inkra spent a lot of time working on the management model. "That was the single biggest thing," he said. "We make hardware, but we're a management company. We built a box that could be managed the way people want to manage things."
Inkra's focus is network management, embodied in products such as the Inkra Virtual Service Switch, which eWEEK Labs reviewed in November (see labs.eWEEK.com). That review noted the "reduced number of physical appliances, management interfaces and policy configuration plans" that enterprise IT managers might expect to enjoy from Inkra's product and other offerings of this kind.
Indeed, manageability should be top of mind for anyone seeking utility computing's benefits. Assuming that hardware driven by Moore's Law and software driven by open-source innovation both trend toward being free, the value for which enterprise IT buyers will pay is increasingly going to be in the form of superior manageability.
Buying in Bulk
At a minimum, however, any pitch for a utility computing initiative should also offer economies of scale and a lowering of peak-to-average ratios. When IT is measured by the cycle, not by the box of CPUs or disks, any given enterprise or business unit should see its IT costs reflect something closer to bulk prices and average needs than to boutique prices for meeting peak demands.
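The arithmetic behind that claim is worth making concrete. The figures below are hypothetical, chosen only to illustrate a 4:1 peak-to-average ratio; actual demand profiles and prices will vary. The comparison is between owning enough capacity to cover the busiest hour (boutique pricing for mostly idle hardware) and paying a slightly higher bulk rate per cycle actually consumed.

```python
# Back-of-envelope comparison with invented numbers: provision-to-peak
# ownership versus metered, pay-per-cycle utility pricing.

peak_demand = 1000        # CPU-hours needed in the busiest hour
average_demand = 250      # typical hourly CPU-hours (peak-to-average ratio of 4:1)
hours_per_year = 8760

owned_cost_per_cpu_hour = 0.10    # dedicated hardware, paid for whether busy or idle
utility_cost_per_cpu_hour = 0.12  # bulk metered rate, higher per cycle consumed

owned_annual = peak_demand * hours_per_year * owned_cost_per_cpu_hour
utility_annual = average_demand * hours_per_year * utility_cost_per_cpu_hour

print(f"provision-to-peak: ${owned_annual:,.0f}/yr")   # $876,000/yr
print(f"pay-per-cycle:     ${utility_annual:,.0f}/yr") # $262,800/yr
```

Even paying 20 percent more per cycle, the metered buyer in this sketch spends less than a third as much, because the owned capacity sits idle most of the time. The higher the peak-to-average ratio, the stronger the utility case.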
Whether IT assets are being shared merely with other business units or among a larger population of a service provider's clients, the utility model should be able to "allocate resources without a screwdriver," to borrow the words of Inkra's Roberts.
That flexibility implies the ability to "follow the sun," in the words of Edouard Bugnion, chief architect at EMC subsidiary VMware, in Palo Alto, Calif. That is, IT capacity flows from one business function to another, or from one geographic region to another, during the course of the day or in other recurring cycles.
"If you stick to physical management of resources," said Bugnion, "you're limiting yourself in how quickly you can deploy new services or respond to changing business needs."
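A follow-the-sun policy can be reduced to a simple scheduling rule. The sketch below is illustrative only; the region names and hand-off hours are invented, and a real implementation would hook such a rule into whatever provisioning system actually moves the virtual capacity.

```python
from datetime import time

# Illustrative follow-the-sun schedule (all times UTC): shift the bulk
# of a shared capacity pool between regions as their business days
# begin and end. Regions and hours are hypothetical.

SCHEDULE = [
    (time(0), time(8), "asia-pacific"),
    (time(8), time(16), "europe"),
    (time(16), time(23, 59, 59), "americas"),
]

def active_region(now: time) -> str:
    """Return the region that should hold most capacity right now."""
    for start, end, region in SCHEDULE:
        if start <= now <= end:       # first matching window wins
            return region
    return "americas"                  # defensive fallback

print(active_region(time(9, 30)))   # europe
print(active_region(time(21, 0)))   # americas
```

The same rule generalizes beyond geography: the "regions" could just as easily be daytime production workloads and overnight test environments drawing on one pool.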
Rather than looking at utility computing merely for cost reduction, Bugnion suggests that there are affirmative benefits in letting compute power flow without friction from one use to another. In conversations with enterprise users, for example, eWEEK Labs has found great interest in using utility approaches to give overnight access to realistic test environments for application development teams, instead of using today's more common approach of having dedicated testbed systems that fail to reflect the challenge of full-scale operations.
That flow of capacity to where it's needed, instead of hardware sitting idle while useful tasks go begging, is what utility computing is all about.
Technology Editor Peter Coffee can be reached at [email protected].