Rethinking Compute Power in the Data Center - Page 2

Analysts have called Azul's technology fairly disruptive. What is Azul doing to convince users to try it out?

This is probably the most disruptive technology that's come across the compute landscape, certainly in my professional career, which dates back to the mid-'80s. The most important thing right now is going out there and testing. It was just about a year ago … that we broke silence on all of this, and over the course of the last year, we have worked very closely with key partners like IBM, BEA, JBoss [Inc.] and Oracle, the key J2EE-class vendors. All of them have seen our gear, all of them have tested our gear. We just announced certification with BEA, so we've had … the J2EE community—which stands to benefit enormously from this—beating on our gear for a long, long time.

We've also had a number of key integrator partners, like an EDS [Electronic Data Systems Corp.], companies that know a heck of a lot about data centers, know a heck of a lot about provisioning and the state of the utility offerings from the traditional systems vendors, beating on our gear. Then, as we got closer to our first customer ship and general availability, we have been bringing our gear into some of the most complex data centers around the globe, not just here in the U.S. These are customers in financial services, the Wall Street types, the big global logistics companies, the big telecommunications companies both here and abroad, Internet properties, [and] high-transaction businesses. … These kinds of customers, who are conservative by nature but feel the pain of trying to scale applications in a highly unpredictable world and don't want to continue down the road of an inefficient model of horizontal or vertical scaling, are early adopters.

The only way you gain confidence in any new technology is to prove it, prove it, prove it, prove it. You're going to see a number of benchmark results coming out from Azul over the next few weeks and months. … We're very pleased with the performance we've been able to deliver.

You released the Compute Appliance in April. How many of these are in actual production, and how many are still in test mode?

A little bit of both. I would say the majority are in pre-production and there are probably a handful of customers that are in production.

One of the elegant elements of this technology is the way customers can deploy it. You can't bring into a big data center, like the enterprise, Fortune 500-class data centers, any new technology that requires tremendous change. You have to bring something that can be transparently implemented. Network-attached processing, much like network-attached storage, can be mounted into your existing environment. What we're seeing in the early production uses is that customers are mounting the compute pools behind existing clusters as a buffer, as extra capacity.

Think about it. If you have an application that has four instances across a four-node cluster and you're seeing wild peaks and valleys in compute usage, then as opposed to adding more and more blades, more two-ways, more four-ways, which only compounds the inefficiencies of that model, people will just take that cluster and back-end it to the compute pool.

That gives them the opportunity to see the host-reduction factor we enable. It also lets them gauge performance and understand any networking traffic, latency or I/O issues that exist. It gives people a real-world opportunity to see this. As they gain confidence, they can start consolidating their data centers.

Customers are very anxious about the profitability of their data centers. If you look at the period between 1995 and 2005, there has been such enormous build-out; in these last 10 years, data centers have absolutely exploded, to the point where you're across from [executives with] the flagship-branded telecommunications companies and banks, and they tell you, "We're out of space." More importantly, for the amount of money they've invested in these data centers, relative to the applications they spin out of them—whether they're applications that run the business, generate revenue streams or run trading floors—the profitability, what they've invested vs. the return they're getting, has hit an all-time low. What they're looking for are technologies that they can get to know, that they can embrace, that they can [deploy] inside their environment, and that dramatically reduce the cost structure of their data centers. Taking 10 percent out means absolutely nothing to them; they're looking for things to change. So we come in with this vision and this story of network-attached processing, and we say, "If you look over the 10 years that are to come, 2005 to 2015, our vision of computing is that small denominations—the two-ways, the four-ways, the eight-ways, etc.—become an irrelevant metric."

By 2015, our vision is that applications, whether they be small, big, mission-critical or back-office, are able to tap into an enormous bucket of processing power that's built for that workload. Instead of buying servers, you're buying the service of processing. Same with storage and networking now: people don't buy in small denominations. They buy into a fabric that they can tap into and share.

[With the rise of service-oriented architectures], the need for this model becomes even more critical, because now you're not trying to capacity-plan around applications, you're trying to capacity-plan around services. That's really hard to do: it's really difficult for an IT manager with 50 different applications from all these different business units to capacity-plan at the individual application level, much less around some service or identity service. So this sort of shared fabric—eliminating the pain of peaks and valleys—has huge ramifications for data center profitability, and that's what's resonating with customers.
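The economics behind the shared-fabric argument above can be sketched with a little statistics: individually provisioned clusters must each be sized for their own peak, while a shared pool only has to cover the peak of the combined demand, and uncorrelated peaks rarely coincide. The following Python sketch is purely illustrative—it is not Azul's sizing methodology, and the application counts and demand shapes are invented assumptions—but it shows the statistical multiplexing effect the speaker is describing:

```python
# Illustrative sketch (not Azul's actual sizing math): why a shared compute
# pool needs less total capacity than per-application clusters when demand
# is bursty and the bursts are uncorrelated.
import random

random.seed(42)  # deterministic demand traces for a repeatable example

NUM_APPS = 50    # hypothetical applications sharing the data center
SAMPLES = 1_000  # demand samples per application (e.g. per-minute load)

# Hypothetical demand traces: a small steady base plus rare bursty spikes.
traces = []
for _ in range(NUM_APPS):
    base = random.uniform(1, 4)
    trace = [base + (random.uniform(10, 20) if random.random() < 0.05 else 0)
             for _ in range(SAMPLES)]
    traces.append(trace)

# Siloed model: each cluster is sized for its own application's peak,
# so total capacity is the sum of the individual peaks.
siloed_capacity = sum(max(t) for t in traces)

# Pooled model: one shared pool sized for the peak of the combined demand.
combined = [sum(t[i] for t in traces) for i in range(SAMPLES)]
pooled_capacity = max(combined)

print(f"siloed capacity needed: {siloed_capacity:.0f}")
print(f"pooled capacity needed: {pooled_capacity:.0f}")
print(f"reduction factor: {siloed_capacity / pooled_capacity:.1f}x")
```

Because the spikes are independent, the peak of the sum is far smaller than the sum of the peaks, so the pooled model needs only a fraction of the siloed capacity; that gap is the "host reduction factor" referred to earlier.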

Since our technology doesn't require any religion—there's no [operating system] religion; we don't care if you're a Linux shop, a Solaris shop or a Windows shop; we have no binary overhead; we don't carry instruction-set baggage like you do in x86 or any of the other microprocessor architectures—we're big, mountable power that can be injected or ejected within seconds, and by doing so you eliminate capacity planning around compute.

Next Page: Reduced power consumption is critical for ROI.