One name that doesn't come up often when you're talking about grid or utility computing is Microsoft. They've staked some ground in the high-end space with their High Performance Computing initiative, but they aren't yet on the same playing field as Sun, IBM, or even HP, Veritas, or Red Hat. A licensing announcement made last week, however, may signal a change in their status.
On Oct. 19, Microsoft announced that it will continue to license its server products on a per-CPU basis, not on the per-core basis embraced by most major enterprise server software vendors. For those of you not familiar with the core-vs.-processor issue: the next generations of AMD and Intel server-class CPUs will contain two cores per physical CPU, providing hardware support in a single chip for what was previously a dual-CPU configuration (as opposed to Intel's currently shipping Hyper-Threading technology, which merely makes a single CPU appear as two logical CPUs to the operating system and applications).
The decision to continue charging for server software based on CPUs rather than cores will make Microsoft server software a much more attractive alternative, especially for utility computing providers offering scalable solutions to their customers. Software licenses, especially for applications such as enterprise database servers, can easily run into the five- and six-figure range for large installations. A service provider offering capacity on demand needs enough software licensing to cover as many CPUs as a customer might plan to throw at an application, which means an on-demand application model has to, in some fashion, pay for software licenses that might not currently be in use.
The utility computing vendor can amortize those license costs across all of the customers of a given application, keeping a small reserve that can be extended quickly if necessary while giving clients the ability to expand their application capacity immediately. And as dual-core CPUs begin to appear in 2005 and become commonplace in 2006, service providers will be buying servers whose dual-core technology comes at little or no cost premium over previous-generation single-core CPUs.
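The amortization model described above can be sketched as simple arithmetic. The function name, license count, and prices below are all hypothetical, invented purely to illustrate how a pooled reserve spreads across active customers:

```python
# Back-of-envelope sketch of amortizing a pooled license reserve across
# on-demand customers. All numbers are invented for illustration only.

def per_customer_license_share(total_licenses, price_per_license,
                               active_customers):
    """Each customer's share of the pooled license cost, including
    the idle reserve held back for sudden capacity demands."""
    return total_licenses * price_per_license / active_customers

# Say the provider buys 12 licenses at a hypothetical $20,000 each,
# with 10 in active use and 2 held in reserve for quick expansion.
share = per_customer_license_share(12, 20_000, active_customers=10)
print(share)  # 24000.0 -- each active customer effectively funds the reserve
```

The point of the sketch is that the reserve's carrying cost is invisible to any individual customer, which is what makes instant capacity expansion commercially workable.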
This means that servers running Microsoft server applications will be able to double their CPU power at no additional licensing cost to the service provider, a savings the provider can pass on to customers, use to improve the bottom line, or some combination of the two. From Microsoft's perspective, service providers become more likely to offer Microsoft-based solutions, especially if their upfront costs are lower and they gain a competitive advantage.
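The cost-doubling argument is easy to make concrete. A minimal sketch, assuming a hypothetical 4-socket server and an invented $20,000-per-unit license price (neither figure comes from Microsoft's actual price list):

```python
# Illustrative comparison of per-CPU (per-socket) vs. per-core licensing
# for a dual-core server. Prices and socket counts are hypothetical.

def license_cost(sockets, cores_per_socket, price_per_unit, per_core=False):
    """Total license cost under per-socket or per-core pricing."""
    units = sockets * cores_per_socket if per_core else sockets
    return units * price_per_unit

PRICE = 20_000  # hypothetical per-unit license price, in dollars
SOCKETS = 4     # a typical enterprise 4-socket server

print(license_cost(SOCKETS, 1, PRICE))                 # 80000  (single-core)
print(license_cost(SOCKETS, 2, PRICE))                 # 80000  (dual-core, per-socket: cores double, bill doesn't)
print(license_cost(SOCKETS, 2, PRICE, per_core=True))  # 160000 (dual-core, per-core: bill doubles)
```

Under per-socket pricing the move to dual-core halves the license cost per core; under per-core pricing the provider sees no savings at all, which is the gap the article argues Microsoft can exploit.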
It also means that the implementation costs of scalable Microsoft solutions in a grid environment will go down. Individual servers will be able to scale more effectively, and the associated costs will be lower. Standardization on multi-core processors will lower the cost per cycle to the provider, giving them more flexibility in packaging their service options.
It will be interesting to see whether Microsoft retains the per-CPU price model as multi-core technology evolves. Four- and eight-core CPUs are already being discussed (a prospect that may be what keeps vendors tied to per-core pricing wary of the potential revenue loss), although at that level of CPU density the servers are more likely to be used for consolidation tasks. In consolidation and virtualization scenarios, multiple operating system instances run on a single machine, a configuration for which Microsoft (and most other operating system vendors, including Red Hat) charges on a per-instance basis.