Microsoft is finally ready to enter the high-performance computing market, a field dominated by open-source Linux software.
The Redmond, Wash., software maker released Windows Compute Cluster Server 2003 to manufacturing on June 9, with general availability of the product scheduled for August.
It will be sold via volume licensing and OEM licensing for an estimated price of $469 a node, but prices will vary depending on the license and volume, John Borozan, group product manager for the Windows Server Division, told eWEEK.
Evaluation copies of Windows Compute Cluster Server 2003, a 64-bit operating system for industry-standard x64 processors, will be handed out to attendees of Microsoft's TechEd 2006 conference in Boston the week of June 12, he said.
This is Microsoft's first software offering designed specifically to run parallel, high-performance computing applications for customers, and it provides a platform that can be deployed, operated and integrated with existing infrastructure and tools. Customers can also leverage their existing development skills using Visual Studio 2005, Borozan said.
The upcoming availability of the Windows Compute Cluster Server marks a milestone for Microsoft, which is a latecomer to a market largely dominated by Linux software.
While Microsoft will release a single 64-bit-only version of the software, it will run on all the hardware platforms supported by Windows Server 2003 Service Pack 1, on which it is based.
All the major OEMs, including IBM, Hewlett-Packard, Dell and NEC Solutions America, as well as the major interconnect vendors, have announced support for the product.
Customer demand for HPC is being driven by increasing processor performance per compute node, low acquisition prices per node and the overall price/performance of compute clusters. These trends are pushing new customers to adopt HPC to replace or supplement live, physical experiments with computer-simulated modeling, testing and analysis, Borozan said.
Analyst firm IDC says it expects unit shipments for HPC to expand by more than 12 percent annually over the next five years, and that high-performance computing clusters will see substantial customer adoption in the lower-end capacity segments of the market.
Early adopters are using Windows Compute Cluster Server for oil and gas reservoir simulation and seismic processing; life-sciences simulations of enzyme catalysis and protein folding; and vehicle design and safety improvements.
One customer, Cornell University's Computational Biology Service Unit, in Ithaca, N.Y., has adopted Windows Compute Cluster Server 2003 as a platform for computational biology applications supporting a wide range of research activities in bioinformatics, including sequence-based data mining, population genetics and protein structure prediction.
“Adopting Windows Compute Cluster Server 2003 was a natural step for us, since we use SQL Server for our database needs and Windows servers for hosting our Web interfaces,” said Dr. Jaroslaw Pillardy, a senior research associate at the Computational Biology Service Unit.
“In addition to serving massively parallel applications, I've found that Windows Compute Cluster Server is a convenient tool for serving the computational needs of many small projects, where installing the software, updating databases and managing other such tasks are much easier with this than on a set of separate computers,” he said.
Borozan said Microsoft is excited about the opportunity to take HPC to a mainstream audience and is working with ISVs that are category leaders in all the vertical markets it is going after: manufacturing, life sciences, geoscience, financial services, government and defense, and academia.
The software giant is also hoping that every researcher and scientist will have access to supercomputing power from their desks within five years.
“To do that we are making sure that it can integrate well with an existing Windows infrastructure, so it uses Active Directory and other Microsoft management technologies like Microsoft Operations Manager. The interface and general environment will thus be familiar to them in that respect,” he said.
The team is also working with other Microsoft products, such as Excel, to bring HPC into a more mainstream arena.
One example of this work involves the Excel client and particularly complex spreadsheets whose models require a great deal of calculation. Those cells can be calculated on a back-end cluster rather than on the desktop machine, bringing sizable performance gains, Borozan said.
The upcoming release of Office 2007 also brings a new version of Excel that can run as a server application. It is integrated with the Windows Compute Cluster Server job scheduler, so users can more easily run thousands of Monte Carlo-type simulations, improve the availability of those models, and expand them to a wider audience by keeping them resident on a server rather than on an isolated workstation, he said.
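To illustrate the kind of embarrassingly parallel Monte Carlo workload Borozan describes, here is a minimal sketch in C using only standard MPI calls. It is not Microsoft's Excel integration; the pi-estimation model simply stands in for whatever financial or engineering model a spreadsheet might offload, and the trial count is an arbitrary choice:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Each rank runs an independent batch of Monte Carlo trials
       (estimating pi here as a stand-in for a real model), then
       the per-rank tallies are combined on rank 0 with MPI_Reduce. */
    int main(int argc, char **argv)
    {
        int rank, size;
        long trials_per_rank = 1000000, hits = 0, total_hits = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        srand(rank + 1);  /* crude per-rank seeding for illustration */
        for (long i = 0; i < trials_per_rank; i++) {
            double x = rand() / (double)RAND_MAX;
            double y = rand() / (double)RAND_MAX;
            if (x * x + y * y <= 1.0)
                hits++;
        }

        /* Sum every rank's hit count onto rank 0. */
        MPI_Reduce(&hits, &total_hits, 1, MPI_LONG, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("pi ~= %f from %ld trials\n",
                   4.0 * total_hits / (trials_per_rank * (double)size),
                   trials_per_rank * (long)size);

        MPI_Finalize();
        return 0;
    }

Because each trial is independent, doubling the node count roughly halves the wall-clock time, which is why this class of simulation is an early fit for compute clusters.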
“IT in those environments is also better able to protect their corporate intellectual property by keeping the models on the server rather than having them distributed across multiple workstations. Those are the kinds of things that we think will make HPC more mainstream, and we'd like it to become as pervasive a resource as printers are today,” Borozan said.
Because HPC involves a lot of custom code, Microsoft recently added support in Visual Studio 2005 for developing parallel applications; that product now includes parallel debugging capabilities, he said.
With regard to the next version of the product, Borozan said that while there is nothing to announce at this time, the development team is defining its feature set. “But at a high level we want to make every researcher and scientist able to access this technology within five years,” he said.
Customers have told Microsoft that deploying an HPC cluster today is very complex. The company believes cost and complexity should not be barriers to innovation and discovery, so reducing both was its focus with this release.
On the cost front, Microsoft believes its offering is priced competitively against rival stacks because it includes components beyond the operating system, such as the MPI layer and the job scheduler, that are not bundled with other solutions.
Customers also will not have to move off their existing technologies to use its product. “We believe that, with our entry, customers will start expanding their existing HPC resources to include Windows rather than to replace existing clusters,” Borozan said.
This is evident in the work Microsoft has done with Platform Computing, a provider of job schedulers. “We have worked with them over the past year to make our job scheduler interoperate with theirs so that the two can send jobs to one another,” Borozan said.
Customers who, say, have a Linux environment running Platform Scheduler could add a Windows-based cluster to the environment, and each could then send jobs to the other, he said.
The MPI layer is based on the open-source MPICH2 reference implementation, against which most ISVs have tested their code, and Microsoft has optimized it for performance and security.
Developers who have written an application for an HPC cluster on Linux and have used the MPICH2 implementation can easily port their applications to Windows, Borozan said.
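The porting story rests on MPI being a standard API: a program that sticks to standard calls, like the minimal sketch below, should recompile against MPICH2 on Linux or against Microsoft's MPICH2-derived layer on Windows without source changes. This is an illustrative example, not Microsoft sample code:

    #include <mpi.h>
    #include <stdio.h>

    /* A minimal MPI program using only standard calls, so the
       same source builds against any conforming MPI library. */
    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
        MPI_Get_processor_name(name, &len);     /* node hostname       */

        printf("rank %d of %d on node %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Code that reaches past the standard into implementation-specific extensions would, of course, need more work to move between platforms.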
Microsoft assumed all along that it would be going into heterogeneous environments, and so worked on the MPICH2 implementation with Argonne National Laboratory, a U.S. Department of Energy laboratory operated by the University of Chicago.
Microsoft announced partnerships with HPC institutes worldwide last November. Those institutes have traditionally done their work on Linux and Unix, where there has been intentional sharing with the open community, he said.
“We know we have a lot to learn and to contribute. We are contributing our MPI code changes back to the community. We continue to listen to customers and partners, but there are no plans to license any of our HPC technology under, say, a Microsoft Shared Source license,” he said.