SANTA CLARA, Calif. -- By virtue of the $49 billion it made on software sales and service in 2009, Microsoft is clearly the world’s No. 1 provider in that business. But when it comes to competing in the burgeoning cloud computing infrastructure sector, it is still an infant compared with vendors that have been focusing on this for years.
At the moment, this domain belongs to Amazon EC2, GoGrid, RightScale, 3Tera, Google, Salesforce.com, Rackspace and Terremark, companies dedicated to the model. Microsoft, which isn’t quite used to playing second fiddle to anyone, is still working its way up the cloud-computing infrastructure stairway.
At the IDG-IDC Cloud Leadership Forum, which ends June 15 at the Santa Clara Convention Center here, Tim O’Brien, senior director of Microsoft’s Platform Strategy Group, explained his company’s take on the economics of cloud computing to a standing-room-only audience.
“The economics of cloud computing have everything to do with scale: Scale is where you get supply-side economies,” O’Brien said. “It’s around how you build out infrastructure and the economies you get by doing that, plus economies from the demand side, around the workloads and applications that are sitting on top of that infrastructure, consuming those resources.”
Where do these economies come from?
“On the supply side, it’s about building data center capacity at scale,” O’Brien said. “It’s not limited to third-party providers like Microsoft, Google, Amazon or anyone else. It’s really about anyone building a large infrastructure.”
O’Brien said that while the capex and opex costs of hardware (bought in ever-larger quantities) and of labor generally fall as a result of implementing cloud-based services, the cost of power is another issue entirely.
Old Metrics ‘Blown out of Water’ by Clouds
“In the old tried-and-true data center metrics and ratios we’ve used for years for how to staff a data center, you had a person for every terabyte of data, a person for every 100 to 150 servers, and a network administrator for every gigabit of bandwidth,” O’Brien said.
“And if you implement this set of [cloud infrastructure] capabilities, meaning automating management (for example, automating the updating, provisioning and patching of instances), you can see that we have individuals managing thousands and thousands of servers.
“All those old ratios just get blown out of the water, and it’s no secret that labor is the largest single piece of the server TCO pie.”
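O’Brien’s staffing arithmetic is easy to sanity-check. The sketch below is a rough back-of-the-envelope model, not Microsoft’s own, built only from the ratios he cited: roughly one administrator per 100 to 150 servers in a traditional shop versus thousands of servers per administrator once updating, provisioning and patching are automated. The fleet size and the fully loaded cost per administrator are hypothetical figures chosen purely for illustration.

```python
# Back-of-the-envelope labor model based on the staffing ratios O'Brien cited.
# The fleet size and the per-admin cost below are hypothetical, illustrative inputs.
from math import ceil

FLEET_SIZE = 10_000            # hypothetical number of servers
ADMIN_COST = 120_000           # hypothetical fully loaded annual cost per admin, USD

def labor_per_server(servers_per_admin: int) -> float:
    """Annual labor cost per server at a given admin-to-server ratio."""
    admins = ceil(FLEET_SIZE / servers_per_admin)
    return admins * ADMIN_COST / FLEET_SIZE

traditional = labor_per_server(125)    # ~1 admin per 100-150 servers (the old ratio)
automated = labor_per_server(5_000)    # "thousands and thousands" of servers per admin

print(f"Traditional data center: ${traditional:,.0f} labor per server per year")
print(f"Automated cloud fleet:   ${automated:,.0f} labor per server per year")
```

At those ratios the labor line shrinks by more than an order of magnitude per server, which is the “blown out of the water” effect O’Brien describes.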
Good News, Bad News
In the cloud, that labor slice gets smaller and smaller, O’Brien said. But the bad news is that the cost of power gets larger and larger.
“This [power cost] is no doubt the fastest-growing piece of server TCO. So when you build out data centers to scale (I know that we’re doing this, and Google as well), we’re situating data centers close to cheap electricity, to get our arms around that power consumption,” O’Brien said.
Some of these mega data centers are soaking up 100 megawatts of power at any given time, O’Brien said. “In data center parlance, those are huge,” he said.
All of this has direct implications for the public-versus-private cloud debate, he said.
“If you are implementing these cloud capabilities in a dedicated fashion, and you’re doing so in a data center that has 1,000 servers, the numbers [due to power consumption] are interesting. We’ve actually modeled this out: A per-server TCO in a 100,000-server farm is less than half of the per-server TCO in a 1,000-server data center,” O’Brien said.
So when a company implements a private, dedicated cloud system, it will do so, more often than not, on a smaller scale than a third-party public cloud provider, O’Brien said.
“You’ll pay a TCO premium, if you will, and not get the full benefit of scale economies on the supply side,” he said.
On the demand side, the issue cloud computing really solves is bad server utilization, O’Brien said.
“The thing that drives bad server utilization is variability of the underlying workloads-either unpredictable workloads or workloads that need to be provisioned for peak load,” O’Brien said. “So this over-provisioning effect has put a lot of companies in a single-digit server utilization situation.”
Servers and storage arrays alike in any given data center commonly run at between 5 and 20 percent of capacity. The unpredictability of workloads, often caused by good old-fashioned randomness, O’Brien said, forces IT managers to over-provision for something unpredictable that might happen.
“The goal we want to get to is diversifying away all this variability,” O’Brien said. “What you want to do is put as many applications, as many workloads, just like you would an investment portfolio, all in one place, and you get a much smoother, much more predictable curve [output]. In a public cloud, you can do that at a larger scale; in a private cloud, you can do it at a smaller scale.”
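The investment-portfolio analogy is, at bottom, a claim about statistical multiplexing: the near-peak demand of many pooled, independent workloads sits much closer to their combined average than any single workload’s peak sits to its own average. The simulation below is a minimal sketch of that effect under assumed conditions (bursty gamma-distributed demand, independent workloads, capacity provisioned at the 99th percentile); it is not a model of any vendor’s actual traffic.

```python
# Minimal sketch of the "portfolio" effect O'Brien describes: pooling many
# variable workloads smooths the aggregate and raises achievable utilization.
# The workload distribution, counts and 99th-percentile rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
HOURS = 24 * 365                      # one simulated year, sampled hourly

def utilization(n_workloads: int) -> float:
    """Mean utilization when capacity is provisioned for the 99th-percentile peak."""
    # Each workload: bursty, gamma-distributed demand averaging one "server" of load.
    demand = rng.gamma(shape=0.5, scale=2.0, size=(HOURS, n_workloads)).sum(axis=1)
    capacity = np.percentile(demand, 99)  # provision for near-peak aggregate demand
    return demand.mean() / capacity

for n in (1, 10, 100, 1_000):
    print(f"{n:>5} pooled workloads -> ~{utilization(n):.0%} average utilization")
```

In runs of this sketch, a lone workload provisioned for its peak idles most of its capacity, landing in the same 5-to-20-percent territory described above, while a pool of a thousand workloads approaches the 85-to-90-percent range O’Brien cites for public clouds.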
Public cloud systems are set up for 85 to 90 percent utilization, O’Brien said. The benefits are obvious.
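To put a rough number on those benefits: for the same average demand, the count of servers that must be bought and powered scales inversely with utilization. The arithmetic below is a hypothetical illustration; the two utilization figures echo the ones quoted in this article, while the load number is a placeholder.

```python
# Servers needed to carry the same average load at different utilization levels.
# The 10% and 87.5% figures echo the article; the load figure is hypothetical.
AVERAGE_LOAD = 1_000          # hypothetical average demand, in "fully busy server" units

def servers_needed(utilization: float) -> int:
    return round(AVERAGE_LOAD / utilization)

low = servers_needed(0.10)    # over-provisioned, low-utilization data center
high = servers_needed(0.875)  # midpoint of the 85-90% public cloud target

print(f"At 10% utilization:   {low:,} servers")
print(f"At 87.5% utilization: {high:,} servers  (~{low / high:.0f}x fewer)")
```

In this toy comparison, carrying the same average load at high utilization cuts the required fleet, and the hardware and power bill that goes with it, by nearly an order of magnitude.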
“The economic tailwinds are pushing us toward this public cloud model because the cost of computing is so dramatically lower, because of the supply-side scale and demand-side scale,” O’Brien said.