How Microsoft Assesses Future of Cloud Economics

Cloud strategist Tim O'Brien explains that while hardware and labor capex and opex generally decline when organizations implement private cloud-based services, the cost of power out of the wall is another issue entirely.

SANTA CLARA, Calif. -- By virtue of the $49 billion it made on software sales and service in 2009, Microsoft is clearly the world's No. 1 provider in that business. But when it comes to competing in the burgeoning cloud computing infrastructure sector, it is still an infant compared with vendors that have been focusing on this market for years.
At the moment, this domain belongs to Amazon EC2, GoGrid, RightScale, 3Tera, Google, Rackspace and Terremark, companies dedicated to the model. Microsoft, which isn't quite used to playing second fiddle to anyone, is still working its way up the cloud-computing infrastructure stairway.
At the IDG-IDC Cloud Leadership Forum at the Santa Clara Convention Center here, which ends June 15, Microsoft Senior Director of the Platform Strategy Group Tim O'Brien explained his company's take on the economics of cloud computing to a standing-room-only audience.
"The economics of cloud computing have everything to do with scale: Scale is where you get supply-side economies," O'Brien said. "It's around how you build out infrastructure and the economies you get by doing that, plus economies from the demand side, around the workloads and applications that are sitting on top of that infrastructure, consuming those resources."
From where do these economies come?
"On the supply side, it's about building data center capacity-at scale," O'Brien said. "It's not limited to third-party providers like Microsoft, Google, Amazon or anyone else. It's really about anyone building a large infrastructure."
O'Brien said that while the capex and opex costs of hardware (when bought in increasing quantities) and labor generally go down as a result of implementing cloud-based services, the cost of power is another issue entirely.
Old Metrics 'Blown out of Water' by Clouds
"In the old tried-and-true data center metrics and ratios we've used for years for how to staff a data center, you had a person for every terabyte of data, a person for every 100 to 150 servers, and a network administrator for every gigabit of bandwidth," O'Brien said.
"And if you implement this set of [cloud infrastructure] capabilities, meaning automating management-for example, automating the updating, provisioning and patching of instances-you can see that we have individuals managing thousands and thousands of servers.
"All those old ratios just get blown out of the water, and it's no secret that labor is the largest single piece of the server TCO pie."
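The headcount implications of those ratios are easy to sketch. The traditional figure below uses the midpoint of O'Brien's "100 to 150 servers" per person; the cloud-side figure of 5,000 servers per administrator is an assumption, since O'Brien says only that automation lets "individuals manage thousands and thousands of servers":

```python
# Back-of-the-envelope staffing comparison based on the ratios O'Brien cites.

def admins_needed(servers: int, servers_per_admin: int) -> int:
    """Round up: even a fraction of a ratio still requires a whole person."""
    return -(-servers // servers_per_admin)  # ceiling division

TRADITIONAL_RATIO = 125   # midpoint of "100 to 150 servers" per person
CLOUD_RATIO = 5_000       # assumed figure for "thousands and thousands"

fleet = 10_000  # hypothetical server fleet
print(admins_needed(fleet, TRADITIONAL_RATIO))  # 80 people
print(admins_needed(fleet, CLOUD_RATIO))        # 2 people
```

A 40x difference in headcount for the same fleet illustrates why, as O'Brien notes, labor is the largest single piece of the server TCO pie.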

Chris J. Preimesberger

Chris J. Preimesberger is Editor-in-Chief of eWEEK and responsible for all the publication's coverage. In his 13 years and more than 4,000 articles at eWEEK, he has distinguished himself in reporting...