Interview: Stephen DeWitt, president and CEO of Azul Systems, discusses the "network-attached processing" that his company's Compute Appliances deliver.
Azul Systems Inc., in Mountain View, Calif., wants to do with processing power what others have done in storage and networking: create an environment where servers can access a ready pool of compute power when needed. The company in April unveiled the first generation of its Compute Appliance products, massive machines with up to 384 processors that speed Java performance and can help enterprises looking to consolidate their data centers and reduce IT costs by doing the work of numerous low-end servers. President and CEO Stephen DeWitt recently spoke with eWEEK Senior Editor Jeffrey Burt about Azul's data center philosophy, its products and its future.
What is network-attached processing?
One of the reasons we call it "network-attached processing" is very much a play off of "network-attached storage," because I think that is something that people can look back to and gauge its impact inside their environment over the last decade.
Network-attached processing, in its most simple terms, is the ability for existing server infrastructure, whether it be Intel [Corp.]-based servers or Unix-based servers, unmodified, to mount an external pool of computing power. That external pool of computing power was built from the ground up for the way that people build applications today, using virtual-machine environments like Java, J2EE [Java 2 Enterprise Edition], .Net, etc. So as existing infrastructure mounts this pool, it takes advantage of a class of infrastructure that's highly optimized for those workloads. Probably the biggest end-user benefit in mounting external processing power is the ability to eliminate the need to capacity-plan at the individual application level.
Just as your notebook is able to mount terabytes of external storage, a two-way Xeon box can mount a compute pool and literally have the processing power of the largest systems in the market transparently without the customer having to do anything.
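The "unmodified" claim above can be made concrete with a sketch. In this model, the application itself contains nothing appliance-specific; as an assumption about how such an offload typically works, only the JVM used to launch the bytecode is swapped for a vendor-supplied one that redirects execution to the shared pool. The class and workload below are hypothetical illustrations, not Azul code:

```java
// An ordinary Java program with no reference to any compute appliance.
// Under network-attached processing, this same bytecode would be launched
// on a vendor-supplied JVM that transparently runs it on the remote pool;
// the source code does not change.
public class OrderReport {
    // CPU-bound work that could execute locally or on the external pool,
    // without the application knowing the difference.
    static long sum(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("total=" + sum(1000)); // prints "total=500500"
    }
}
```

The point of the sketch is that, like NFS for storage, the offload boundary sits below the application: the code compiles and runs identically on either side.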
You've said that you want to do with compute power what other companies have done with storage and networking.
In the world of compute, things have to change. We've had pretty much the same computing model since the mid-60s. What this means for the end user is as they size their infrastructure for the applications that they're deploying, they size that infrastructure on one of two axes: they either horizontally scale server infrastructure by clustering a whole bunch of small-denomination compute bricks, or they buy big iron. I think the industry as a whole knows the good, the bad and the ugly associated with both horizontal and vertical scale.
We have an opportunity right now to eliminate a lot of that architectural inefficiency, and if you accept as a given that the world is moving to virtual-machine environments, and that's a pretty safe assumption, given that that's the strategy of just about everybody, Microsoft [Corp.], IBM, Oracle [Corp.], BEA [Systems Inc.], SAP [AG], etc., then the concept we've pioneered is very viable.
Just as NFS [Network File System] open, standards-based protocols allowed us to transparently mount external storage, the world of virtual machines allows us the opportunity to separate the function of compute from the computer, and by doing so allows existing infrastructure to mount this big shared pool. We talk so much about utility computing; I don't think there's been anything in the last "n" number of years in the computing world that's been hyped as much as utility computing. But if you really do believe that the function of compute can be a utility, and we very much subscribe to that, then there are some very fundamental things that have to change. It starts at the underlying architecture, it starts in the way that processing power is delivered, [and] it also involves the economics of processing. It's not just how, it's also how much, and we think that's real key.