Azul Targets Compute Power

By eweek  |  Posted 2005-10-03

Company's new product line eases the burden on servers.

Azul Systems Inc., in Mountain View, Calif., wants to do with processing power what others have done in storage and networking—create an environment where servers can access a ready pool of compute power when needed. The company in April unveiled the first generation of its Compute Appliance products, massive machines with up to 384 processors that speed Java performance and can help enterprises looking to consolidate their data centers and reduce IT costs by doing the work of numerous low-end servers. President and CEO Stephen DeWitt recently spoke with eWEEK Senior Editor Jeffrey Burt about Azul's data center philosophy, its products and its future.

What is network-attached processing?

Network-attached processing, in its most simple terms, is the ability for existing server infrastructure, whether it be Intel [Corp.]-based servers or Unix-based servers, unmodified, to mount an external pool of computing power. Probably the biggest end-user benefit in mounting external processing power is the ability to eliminate the need to capacity plan at the individual application level. Just as your notebook is able to mount terabytes of external storage, a two-way Xeon box can mount a compute pool and literally have the processing power of the largest systems in the market transparently without the customer having to do anything.

You've said that you want to do with compute power what other companies have done with storage and networking.

We have an opportunity right now to eliminate a lot of that architectural inefficiency [in data centers], and if you accept as a given that the world is moving to virtual machine environments—and that's a pretty safe assumption, given that that's the strategy of just about everybody, Microsoft [Corp.], IBM, Oracle [Corp.], BEA [Systems Inc.], SAP [AG], etc.—then the concept we've pioneered is very viable.

Just as open-standards-based protocols like [Network File System] allowed us to transparently mount external storage, the world of virtual machines gives us the opportunity to separate the function of compute from the computer and, by doing so, allows existing infrastructure to mount this big shared pool.

Analysts have called Azul's technology fairly disruptive. What is Azul doing to persuade users to try it out?

The most important thing right now is going out there and testing. It was just about a year ago ... that we broke silence on all of this, and over the course of the last year, we have worked very closely with key partners like IBM and BEA, JBoss [Inc.], Oracle, the key J2EE [Java 2 Platform, Enterprise Edition]-class vendors. All of them have seen our gear; all of them have tested our gear. The only way you gain confidence in any new technology is to prove it, prove it, prove it, prove it.

You mentioned the desire among businesses to get a return on investment. However, your technology is pricey. How do you address the cost issue with customers?

Our 96-way has a list price of $89,000, so if you think about that on a per-processor basis, you're talking about roughly $1,200 per processor. That's pretty cheap.

Any infrastructure play, whether it's storage, networking, database, etc., eventually boils down to a [total cost of ownership] play. While our capital costs are extremely competitive ... the big win for us is the fact that, first off, customers are seeing significant host-reduction factors, along with savings in power and space. [In] a standard 42U [73.5-inch] rack, we can put enormous power in a very small footprint.

But what really takes the argument off the table concerning the old way of doing things is the fact that you eliminate the need to capacity plan at the individual application level.

What about the issue of latency? If you're taking the workload off the server—by sending the work to the Compute Appliance, crunching the numbers and sending it back to the server—aren't you adding latency into the equation?

We're another hop in the wire, so obviously we introduce wire-level latency between the host and us. In a zero-loaded world, we add a couple of microseconds to the process, but nobody cares about a zero-loaded environment. What people care about is a loaded environment, and in a loaded environment, we effectively eliminate latency.

Right now this technology is targeted at J2EE workloads. Will we be seeing support for .Net?

Absolutely. The engineering challenge that's in front of us for .Net support is bringing the same sort of segmented virtual machine work that we pioneered in the world of Java to the world of CLR [Common Language Runtime]. We're in discussions with Microsoft on that, and we hope to announce a formal plan of record in the weeks ahead.

Looking forward, in what other directions are you hoping to take Azul?

The shared-compute-pool model is applicable in other areas of processing as well. Take SSL [Secure Sockets Layer], for example. I think enterprises would SSL everything if they could capacity plan it and deliver SSL effectively. But they don't, because of the challenge that's associated with it. Those are big pools of compute power that can be delivered: XML, etc. So this whole concept of delivering big pools of processing power has extensibility into a number of other processing areas, and we're looking at that.
