Rethinking Compute Power in the Data Center - Page 3

You mentioned the desire among businesses to get a return on their investment. However, your technology is pricey. How do you address the cost issue with customers?

Our 96-way has a list price of $89,000, so if you think about that on a per-processor basis, you're talking about $1,200 per processor. That's pretty cheap. Go look at your favorite Dell [Inc.] box and look at what that costs on a per-processor basis. Part of our whole economic value proposition was to reset the commodity line in the industry. As an industry, we all admire Dell and we admire the sophistication that Dell has in their business model. They made 18.6 points in gross margin last quarter, and they're the kingpin in terms of operational efficiency.

But if you think about that as the commodity line in the industry, part of the hypothesis around Azul was that we would be able to fundamentally transform the economics associated with processing power. That involves a lot of things. First of all, it does involve your traditional price/performance metrics. We have to be very [strong on price/performance] relative to existing technology, but the big home run for Azul is the total cost of ownership.

Any infrastructure play, whether it's storage, networking, database, etc., eventually boils down to a TCO play. While our capital costs are extremely competitive, ranging from an $89,000 96-way up to a half-a-million-dollar 384-way—which is pretty amazing relative to traditional big iron—the big win for us is that customers are seeing significant host-reduction factors. We are seeing a minimum of 3X host reduction in virtually every eval or existing customer deployment we've gone into. This means they're able to reduce their host front end by a factor of three. In many cases, we've seen host-reduction factors well up in the double digits.
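The host-reduction claim above can be sketched as a toy calculation. The 48-host starting point below is an illustrative assumption, not a figure from the interview; only the 3X and double-digit reduction factors come from the text.

```python
# Toy sketch of the host-reduction claim: how a front end shrinks
# under the 3X (minimum) and double-digit factors cited above.
# The 48-host starting point is an illustrative assumption.

def hosts_after_reduction(current_hosts: int, reduction_factor: float) -> int:
    """Front-end hosts remaining after consolidating onto the appliance."""
    return max(1, round(current_hosts / reduction_factor))

for factor in (3, 10):
    remaining = hosts_after_reduction(48, factor)
    print(f"{factor}X host reduction: 48 -> {remaining} front-end hosts")
```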

Plus, how many people are required to manage 50 servers? If you've got 50 1U (1.75-inch) boxes—just boxes that you could buy running Linux and BEA and an instance of an application you've authored—there are bodies associated with that. There's a huge human factor associated with that.

Power and space. Our 96-way consumes only 700 watts of power. Our 384-way consumes only 2½ kilowatts. So in a standard 42U (73.5-inch) rack, we can put enormous power in a very small footprint. In New York, where you're charged 18 cents per kilowatt-hour, or London, where you're charged 25 cents per kilowatt-hour, this gear pays for itself in what you're saving in power relative to comparable capacity in the old building-block approach.
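The payback arithmetic can be checked with a short script. The wattage figures for the appliance and the tariff come from the interview; the "comparable capacity" of 50 commodity 1U servers at roughly 300 W each is an illustrative assumption.

```python
# Annual electricity cost of a continuous power draw at a given tariff.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_power_cost(watts: float, dollars_per_kwh: float) -> float:
    """Cost in dollars of running `watts` continuously for a year."""
    return watts / 1000 * HOURS_PER_YEAR * dollars_per_kwh

# Interview figures: the 96-way draws 700 W; New York power is $0.18/kWh.
azul_cost = annual_power_cost(700, 0.18)

# Assumption: comparable capacity as 50 commodity 1U servers at ~300 W each.
legacy_cost = annual_power_cost(50 * 300, 0.18)

print(f"96-way appliance: ${azul_cost:,.0f}/yr")
print(f"50 x 1U servers:  ${legacy_cost:,.0f}/yr")
print(f"Annual savings:   ${legacy_cost - azul_cost:,.0f}")
```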

I find it amazing to watch the traditional server vendors talk about power savings when you look at what network-attached processing consumes relative to existing servers. We're talking multiple orders of magnitude here, not only in pure power savings but also in real estate. Our 384-way is in an 11U (19.25-inch) form factor, a little bit bigger than a bread box. And our 96-way is in a 5U (8.75-inch) form factor. So when you start looking at the ecosystem costs, the human costs, the power and space, you start to get a sense of how overwhelming this value proposition is.

But what really takes the argument off the table concerning the old way of doing things is the fact that you eliminate the need to capacity-plan at the individual application level. If you're a bank, and you have 1,200 applications, every one of those applications requires capacity planning. How much power are you going to need next Thursday at 4 o'clock? And everybody over-provisions, because the one thing IT is never going to do is under-provision. You're always going to have overage there, so you're always going to have underutilization. You're going to have the 8, 9, 10, 11 percent utilization rates. It's just the way things are right now, but things need to change.

People need data centers to be more profitable. They need to start seeing 50, 60, 70 percent utilization of their server infrastructure, and yet have capacity available at a microsecond granularity to address the unpredictable nature of compute. That's what network-attached processing does. Just as network-attached storage solved that problem in the storage world, and networking in general solved it in the networking world, we solve it in the world of compute.
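The pooling argument above can be illustrated with a small simulation: because independent, bursty workloads rarely peak at the same moment, one shared pool needs far less headroom than the sum of per-application peaks. The workload shape (a 5-unit baseline with rare spikes to 100) is an illustrative assumption, not data from the interview.

```python
# Why pooling raises utilization: per-app provisioning sizes for every
# individual peak, while a shared pool sizes for the aggregate peak.
# Workload parameters here are illustrative assumptions.
import random

random.seed(1)

APPS, SAMPLES = 20, 1_000

def bursty_demand() -> list:
    """Mostly-idle workload: 5 units baseline, rare spikes to 100."""
    return [100.0 if random.random() < 0.02 else 5.0 for _ in range(SAMPLES)]

demand = [bursty_demand() for _ in range(APPS)]

# Old model: every application is capacity-planned for its own peak.
per_app_capacity = sum(max(series) for series in demand)

# Pooled model: one shared pool sized for the peak of aggregate demand.
aggregate = [sum(series[t] for series in demand) for t in range(SAMPLES)]
pooled_capacity = max(aggregate)

mean_total = sum(aggregate) / SAMPLES
print(f"Per-app provisioning utilization: {mean_total / per_app_capacity:.0%}")
print(f"Pooled provisioning utilization:  {mean_total / pooled_capacity:.0%}")
```

The peak of the aggregate can never exceed the sum of the individual peaks, so pooled utilization is always at least as good; with bursty workloads like these, it is dramatically better.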
