The concept of utility computing conjures up images of rack after rack of inexpensive computers—interchangeable commodity items where no one computer is more important than another. In general, this vision does not include thoughts of Microsoft software, but rather of free versions of Unix and Linux operating systems.
As you might imagine, Microsoft is loath to give up on any potentially lucrative market segment, so it has been courting academic research institutions with free or low-cost Windows Server licenses for utility computing projects. But it's clear that any potential profits would come from the business marketplace, not the academic world.
To establish a position in this market, Microsoft has taken a somewhat different tack than the Unix and Linux players. It has relabeled the market segment that it wishes to dominate as "high-performance computing," and it's planning to take on the large-scale mainframe and supercomputer applications now running on Unix with a special version of Windows Server called Windows Server 2003, High Performance Computing Edition.
Don't hold your breath waiting, though. The best guess is that Windows Server 2003, HPC Edition will ship in mid-2005, and as with any Microsoft product that doesn't have a confirmed ship date, that schedule is flexible, especially given the recent announcements that the new Windows File System would not be ready in time to ship with Windows Longhorn.
So, it seems likely that the HPC version of Windows Server won't ship until WFS is available and certified for use in the exceedingly demanding HPC environment.
This lack of product hasn't stopped Microsoft from releasing a 200-plus-page migration guide designed to walk administrators of large-scale Unix applications through the process of moving to Windows Server.
This solutions guide includes detailed information on configuring four types of high-performance computing environments using Windows Server: symmetric multiprocessing, massively parallel processing, networks of workstations, and Web server load-balancing systems.
The guide isn't a hard-core technical document. Rather, it uses a soup-to-nuts approach to take the reader from the conceptualization of the project through deployment and operations of a Microsoft high-performance computing environment. The information in the guide is also suitable for building HPC solutions from scratch that don't involve a Unix migration.
Microsoft Raises Utility Bills
But Microsoft's entry into the utility computing market doesn't face as much of a technical hurdle as it does a financial one. The current users of large-scale utility or grid-computing environments primarily comprise academic institutions and research organizations. The cost of licensing Windows on dozens or hundreds of nodes would be prohibitive, so Microsoft will need to come up with a pricing model that reflects the clustered nature of this computing model.
The alternative to competing with the low-cost model that Linux brings to this market is to target industries that are not price-sensitive, such as the financial services market, where the value of the results is significantly greater than the cost of the computing environment. Convincing these businesses that Windows Server HPC will be a viable and cost-effective alternative will be Microsoft's goal in building its HPC business model.
It would seem that Microsoft is still in the early stages of its HPC strategy, despite its Windows Server 2003 announcements. While a check of the Windows Server HPC home page shows quite a bit of linked information, much of it—such as the HPC FAQ—links back to information that was developed and released for Windows 2000 Server.
That FAQ, for example, is actually dated January 7, 2001, which does not inspire much confidence that there is a great deal of activity at Microsoft focused on this topic.
Despite the pricing issues, Microsoft does have a lot to offer in the HPC world. Managing large-scale projects is always an issue, and the Microsoft platforms excel at providing easy-to-use management tools. Scalability is well-established, with concrete metrics available to show what additional performance gains can be expected in a given situation.
The Windows Server platform is also suitable for both scale-up and scale-out scenarios, and in many cases the two configurations can be combined to solve a single problem set, all while maintaining the same management interface regardless of the technical details of the hardware and software implementation.
The work Microsoft has been doing to simplify software deployment for Windows Server will also pay dividends in the HPC environment, where maintaining and deploying software across hundreds of computers is a daily part of the operation.
So, while it looks like Microsoft is playing catch-up in the utility computing marketplace, it already has many of the tools, platforms and applications that can make it a viable contender. Whether the company will be able to convince the buyer that it is a better choice remains to be seen.
Check out eWEEK.com's Utility Computing Center for the latest utility computing news, reviews and analysis.