Sun's Papadopoulos: Integrated Infrastructure Is the Future

Q&A: Sun CTO Greg Papadopoulos believes the company's Project Blackbox is the way the industry is moving as it tries to address customer concerns over power, cooling and data center real estate.

Sun Microsystems on Oct. 17 will unveil its Project Blackbox, an initiative designed to address such issues as power, cooling and infrastructure deployment for companies in such areas as Web 2.0 and high-performance computing.

The plan calls for delivering all the technology traditionally found in a 10,000-square-foot data center—from servers to storage to software—pre-integrated and ready to roll inside a standard shipping container.

Basically, a customer orders what they want, Sun builds it inside a container and within a few weeks the container is delivered to the customer's site. The user simply plugs in the power, networking and chilled water and it's ready to go. Greg Papadopoulos, Sun's executive vice president and chief technology officer, spoke with eWEEK Senior Editor Jeffrey Burt about the concept.

What was the inspiration behind the Blackbox Project?

Looking at the fact that everybody who is using computing today is actually sort of custom-building larger systems. And most of the computer business is giving people the piece parts. It's like we gave people power generators and said, "Go build a power plant." And we thought, "Maybe we should go look at what engineering at this level would do."

So we sort of looked at the holistic problem, that computing is not just the server or storage or networking gear, but how those fit together. And then how they are powered, and how they're cooled and what's the facility for them. We wanted to engineer that.

So this is, how do you go after very high-scaled deployments that need to be exceptionally efficient, low-cost, ecologically responsible, and then basically challenging the assumption that you've had for so many years in computing that people and machines live together.

It goes back to the operator who used to hang tapes and change chad out. So we designed spaces that could handle both people and machines, and that was a happy thing to do for a long time until things like power and cooling and a whole bunch of other requirements became so excessive that it's actually massively inefficient and time-consuming to go design data centers now.

So was the drive behind this because of power and cooling, was it because of space constraints, was it because of the need for data centers to be more flexible and more dynamic?

Yes, yes, yes. All of that. The reality of how the idea got started was I was visiting Danny [Hillis, co-chairman and CTO at Applied Minds] in Burbank [Calif.], and we used to work together designing supercomputers, and we were talking about the trend toward smaller, faster servers … and what's the smallest, densest thing you could make, and in typical contrarian style with Danny it was, well, what's the biggest one you can make? And then you say, "Well, you know, if you make it any bigger than a shipping container, you can't move it around easily. If you make it exactly a shipping container, then you get this whole, interesting worldwide infrastructure. OK, so then let's use that as a design point. That's going to be the size."

Then we went through the capture of, so what really goes on and what's important as people are building out grids and things. We've been building out our own grid, so we were taking a lot of that learning and incorporating it into the design.

You've got a number of patents pending on this, including two of them for the cooling technology. Can you talk about the cooling system in this and how it works?

It's one of those things where we looked at the cooling for a long time. One thing you could think of is, well, let's put kind of a raised floor there and do that kind of thing, up-down, sideways airflow, and then this very simple design came out that says, "Oh, let's put all of the racks front to back in a circle inside the container—a ring around the outside—and then just circulate the air through each server and just keep circulating it."

Of course, you do that for more than a few seconds, and you'll have a blast furnace. So you interpose heat exchangers in between each rack—the exit of [air from] one rack is cooled down and is directly the cool air for the next rack. There are no other things getting in the way. In fact, it forms this kind of perfect cyclonic flow inside the box, and it's very quiet, it's very efficient, and it keeps all the air contained within the unit, so a lot of things like fire suppression get a lot easier. It lets you put it in atmospheres that, outside of the container, are not necessarily clean. You don't have to worry about that. It's just for use inside. It's a really elegant breakthrough.
