Rethinking Compute Power in the Data Center - Page 4


What about the issue of latency? If you're taking the workload off the server, sending it to the Compute Appliance, crunching the numbers and sending it back, aren't you adding latency into the equation?

We're another hop in the wire, so obviously we introduce wire-level latency between the host and us. In a zero-load world, we add a couple of microseconds to the process, but nobody cares about a zero-load environment. What people care about is a loaded environment, and in a loaded environment, we effectively eliminate latency.

This is the first 21st century computing platform. This is a computing platform birthed in the 21st century, created in the 21st century, built in the 21st century, so we're not going back to the challenges of the past. Our underlying processor architecture is perfectly suited for virtual machine workloads, so we don't end up in the sort of queuing penalty box that exists in virtually every other server that's ever been built. Not only do we have enormous throughput, which solves a lot of the latency issues in general, but we also bring so much power to bear that, in a loaded environment, latency doesn't become an issue.
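The queuing argument here can be illustrated with a toy M/M/1 model. The numbers below are my own illustration, not Azul's published figures: the point is that under load, queuing delay dominates wire latency, so a box with much more headroom can add a couple of microseconds per hop and still respond far faster overall.

```python
# Illustrative M/M/1 sketch (hypothetical numbers): why queuing delay under
# load can dwarf the wire-level latency of an extra network hop.

def mm1_response_time_us(service_rate_per_s: float, arrival_rate_per_s: float) -> float:
    """Mean response time (service + queuing) in microseconds for an M/M/1 queue."""
    assert arrival_rate_per_s < service_rate_per_s, "queue must be stable"
    return 1e6 / (service_rate_per_s - arrival_rate_per_s)

arrivals = 9_000.0  # requests/s offered to either system (assumed)

# Conventional server: 10,000 req/s capacity -> running at 90% utilization.
conventional = mm1_response_time_us(10_000.0, arrivals)

# Hypothetical high-throughput appliance: 100,000 req/s capacity -> 9% utilized,
# even after paying a couple of microseconds of wire latency each way.
wire_hop_us = 2.0
appliance = mm1_response_time_us(100_000.0, arrivals) + 2 * wire_hop_us

print(f"conventional: {conventional:.0f} us, appliance: {appliance:.0f} us")
```

With these assumed rates, the loaded conventional server spends roughly a millisecond per request while the lightly loaded appliance responds in tens of microseconds, extra hop included, which is the sense in which load, not the wire, decides the latency question.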

We're actually starting to see (and I think this is going to become pervasive over the next decade) that in certain metro areas like London, and most certainly in New York, the service provider, whether it's the big telco or whoever controls a given geography, has so much fiber control over the infrastructure that they are only a handful of microseconds away from their customers' data centers. They are truly able to vend processing power to host systems located on a customer's premises with virtually no latency overhead. … It's not all there today. It won't be in mass markets in the next three or four years, but within the next 10 years, with the work that we're doing and the evolution of network-attached processing, what the providers are doing with their fiber infrastructure and what the networking vendors are doing, I think what you're going to see is a completely different compute model. The legacy model is in the twilight of its historical relevance.


Right now this technology is targeted at J2EE workloads. Will we be seeing support for .Net?

Absolutely. And the amazing thing is it's the same infrastructure. No changes to the hardware, no changes to the microprocessor, which nobody's ever done before, and that goes to the agnostic nature of network-attached processing. We're not in the OS game. We don't do what IBM and [Hewlett-Packard Co.] and Sun [Microsystems Inc.] and Dell do, as far as that's concerned. We're focused on delivering processing cycles.

The engineering challenge that's in front of us for .Net support is bringing the same sort of segmented virtual machine work that we pioneered in the world of Java to the world of the CLR [Common Language Runtime]. We're in discussions with Microsoft on that, and we hope to announce a formal plan of record in the weeks ahead.

Looking forward, in what other directions are you hoping to take Azul?

There are a couple of things. If you go back to the foundation of the company, it's to map the architecture to the way that people build applications today. The shared compute pool model is applicable in other areas of processing as well. Take SSL [Secure Sockets Layer], for example. I think enterprises would SSL everything if they could capacity-plan it and deliver SSL effectively, but they don't because of the challenge that's associated with it. Those are big pools of compute that can be delivered. XML, etc. So this whole concept of delivering big pools of processing power has extensibility into a number of other processing areas, and we're looking at that.
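The capacity-planning point behind the shared-pool model can be sketched with a toy statistical-multiplexing example. The workload shape and numbers below are my own assumptions, not Azul's data: because bursty workloads rarely peak at the same time, one shared pool needs far less capacity than the sum of per-application silos each sized for its own peak.

```python
# Toy illustration (assumed workloads): sizing one shared compute pool versus
# sizing a separate silo for each application's individual peak.

import random

random.seed(42)
N_APPS, T = 8, 1_000  # eight bursty workloads, 1,000 time samples

# Each app's load per interval: a baseline of 10 units, plus a rare burst of 90.
loads = [[10 + (random.random() < 0.05) * 90 for _ in range(T)]
         for _ in range(N_APPS)]

# Siloed model: every app gets capacity for its own worst moment.
per_app_peaks = sum(max(app) for app in loads)

# Pooled model: one pool sized for the worst *combined* moment.
pooled_peak = max(sum(app[t] for app in loads) for t in range(T))

print(f"siloed capacity: {per_app_peaks}, pooled capacity: {pooled_peak}")
```

Under these assumptions the pooled peak comes out at a fraction of the siloed total, since all eight bursts almost never coincide. That is the economics that makes vending SSL, XML or Java cycles from a big shared pool attractive compared with capacity-planning each application separately.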

Another area of intense interest to us is the role of the network between the tiers. If you subscribe to the vision that networking and storage and the various forms of processing power are ultimately services, not server-bound, then that requires a level of homogenization between network infrastructure and the infrastructure that delivers these services. So that is an area that we are investing heavily in.
