Sun Microsystems on Oct. 17 will unveil its Project Blackbox, an initiative designed to address such issues as power, cooling and infrastructure deployment for companies in such areas as Web 2.0 and high-performance computing.
The plan calls for delivering all the technology traditionally found in a 10,000-square-foot data center—from servers to storage to software—pre-integrated and ready to roll inside a standard shipping container.
Basically, a customer orders what they want, Sun builds it inside a container, and within a few weeks the container is delivered to the customer's site. The user simply plugs in the power, networking and chilled water, and it's ready to go. Greg Papadopoulos, Sun's executive vice president and chief technology officer, spoke with eWEEK Senior Editor Jeffrey Burt about the concept.
What was the inspiration behind the Blackbox Project?
Looking at the fact that everybody who is using computing today is actually sort of custom-building larger systems. And most of the computer business is giving people the piece parts. It's like we gave people power generators and said, “Go build a power plant.” And we thought, “Maybe we should go look at what engineering we could do at this level.”
So we sort of looked at the holistic problem: that computing is not just the server or storage or networking gear, but how those fit together, and then how they are powered, how they're cooled and what the facility for them is. We wanted to engineer that.
So this is about how you go after very high-scale deployments that need to be exceptionally efficient, low-cost and ecologically responsible, and then basically challenging the assumption that you've had for so many years in computing that people and machines live together.
It goes back to the operator who used to hang tapes and change out chad. So we designed spaces that could handle both people and machines, and that was a happy thing to do for a long time, until things like power and cooling and a whole bunch of other requirements became so excessive that it's actually massively inefficient and time-consuming to go design data centers now.
So was the drive behind this because of power and cooling, was it because of space constraints, was it because of the need for data centers to be more flexible and more dynamic?
Yes, yes, yes. All of that. The reality of how the idea got started was I was visiting Danny [Hillis, co-chairman and CTO at Applied Minds] in Burbank [Calif.]—we used to work together designing supercomputers—and we were talking about the trend toward smaller, faster servers … and what's the smallest, densest thing you could make. And in the typical contrarian style with Danny, it was, well, what's the biggest one you can make? And then you say, “Well, you know, if you make it any bigger than a shipping container, you can't move it around easily. If you make it exactly a shipping container, then you get this whole, interesting worldwide infrastructure. OK, so then let's use that as a design point. That's going to be the size.”
Then we went through the capture of what really goes on and what's important as people are building out grids and things. We've been building out our own grid, so we were taking a lot of that learning and incorporating it into the design.
You've got a number of patents pending on this, including two for the cooling technology. Can you talk about the cooling system and how it works?
It's one of those things where we looked at the cooling for a long time. One thing you could think of is, well, let's put kind of a raised floor there and do that kind of thing—up-down, sideways airflow—and then this very simple design came out that says, “Oh, let's put all of the racks front to back in a circle inside the container—a ring around the outside—and then just circulate the air through each server and keep circulating it.”
Of course, if you do that for more than a few seconds, you'll have a blast furnace. So you interpose heat exchangers between each rack—the exit air from one rack is cooled down and becomes the cool air for the next rack. There is nothing else getting in the way. In fact, it forms this kind of perfect cyclonic flow inside the box. It's very quiet, it's very efficient, and it keeps all the air contained within the unit, so things like fire suppression get a lot easier. It lets you put the container in atmospheres that are not necessarily clean outside; you don't have to worry about that. It's just for uses inside. It's a really elegant breakthrough.
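The closed loop he describes—racks in a ring, with a heat exchanger after each one returning the exit air toward the chilled-water temperature—can be sketched numerically. The model below is a toy illustration, not Sun's design: the rack heat rise, exchanger effectiveness and water temperature are all assumed numbers chosen for the example.

```python
# Toy steady-state model of a closed-loop ring of racks with a
# heat exchanger between each pair of racks. All parameter values
# here are illustrative assumptions, not Sun's actual figures.

def steady_state_temps(n_racks, inlet_c, rack_delta_c,
                       hx_effectiveness, water_c, n_laps=50):
    """Circulate one parcel of air around the ring until temps settle.

    Each rack heats the air by rack_delta_c degrees; each exchanger
    then pulls the exit air part-way back toward the chilled-water
    temperature water_c. Returns per-rack inlet temperatures after
    n_laps circuits of the ring.
    """
    t = inlet_c
    inlets = [0.0] * n_racks
    for _ in range(n_laps):
        for i in range(n_racks):
            inlets[i] = t                          # air entering rack i
            t += rack_delta_c                      # rack adds its heat
            t -= hx_effectiveness * (t - water_c)  # exchanger cools exit air
    return inlets

temps = steady_state_temps(n_racks=8, inlet_c=25.0, rack_delta_c=12.0,
                           hx_effectiveness=0.8, water_c=15.0)
print([round(x, 1) for x in temps])
```

In steady state every rack sees the same inlet temperature, because the exchanger after each rack removes exactly the heat that rack added—which is the point of the design: no rack inherits a hotter inlet than its neighbors.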
So when you look at this, who do you envision being the primary users?
There are two classes of users. The first is the sort of Google-dot-next people, who are on this curve of very high-end, growing infrastructure requirements to build things like Web services, software as a service, those kinds of things.
That's characterized by scale that's important, efficiency that's incredibly important—what does this cost to buy, operate, etc.—as well as reaction time, the ability to go provision these things. Google's spending, what, two years to go put in a data center? That just seems archaic for technology that, as [Sun Chairman] Scott [McNealy] says, has the shelf life of a banana.
At that level, this is designed for really high efficiency. The other part of this is that there is a strong responsibility with it, which is: we will take it back. So at the end, when you're done with it and it's no longer the latest technology that you want to power that part of the network, we'll come and pick it up and responsibly recycle it.
The other class is people who have really intense mobile requirements, who just need to be able to reactively site computing somewhere. That can be governments of various flavors, disaster recovery, certainly Web 2.0 kinds of companies that want to move computing toward where the power is, where networking [is important]—let's go plug in our sites in Europe and Asia and Africa and the like.
Do you envision this as a temporary, stop-gap measure for these companies, or is this a longer-term thing for them?
No, I think this is the way that computing gets done. It's engineered infrastructure. Today what we do is we build all this computing stuff, and there's fierce competition among the component [makers].
We talk about industry standards and commoditization and high volume, all this stuff, and yet, at the very end of the game, someone does a fully architected custom view of a raised-floor data center, and all of them are different, and you say, “Well, what's that all about?”
It's like you're driving around in a car built with all these great scale economies, [but] every place you need to park it, you have to build a custom garage. Well, we ought to engineer that, too, and get that into mass manufacturing. This is sort of another angle for you: the mass manufacturing of the data center.
When you look at this, there certainly are advantages to the end user. What are the advantages to Sun operating in this way?
We're fundamentally a systems company. Our model today is we sell systems that comprise hardware—servers and storage—and the software, Solaris, that goes on top of it. That's our ideal sale. We'll sell the components independently—we'll give away the components independently—so that's all the new modern business models here.
This is the system—the next era in system design and system engineering for us. It's what we do. Think of it this way: we have been building computers that attach to networks, and now we're building computers from networks, and the other thing that's in the middle of this container is a network. The hundreds of servers [and] storage units in one of these things typically are interconnected by a high-speed network inside that container.
It sounds like—certainly on a much larger scale—something similar to what HP is doing with its Lights Out Project, the idea being to put the hardware together with the power and cooling and networking technology into a self-contained unit. Do you see this as a trend in the industry itself, outside of simply what Sun is doing?
I think there is a huge pent-up demand for somebody to figure this out. The step where we've done a pretty radical, out-of-the-box [move], if you will, is [asking], “So, what did you need the data center for in the first place?”
All these other designs [for cooling] are basically, yeah, we're going to give it a new wrap, or we're going to bring chilled water into this rack, or gas exchange, or we'll design a set of racks that do that. It's always in the context of: install it in your machine room. Here, there is no machine room. There's a container port.
None of that matters unless you talk about how a hundred of these things fit together, what your infrastructure is to support that, and how you build them, design them and commission them in places different from where you deploy them.
Even with lights-out, as HP talks about it in the data center, people still have to bring the gear there and put it together. Here, again, because of the magic of this, you can ship these containers anywhere in the world really cheaply, put them together wherever you want them put together, and then ship them. You have them hooked up and running on the spot, and that's a very different cycle, not only in speed of deployment but also in where you need the skills.
Are you able to do things like copy it exactly? A lot of these folks have patterns of pieces of their data center service or whatever, and they'd like to get that pattern exactly deployed in India, and they don't want anybody messing with it.
How do you see what you're doing with this project impacting what you offer for a more traditional data center environment?
I think for the time being, we're going to be focused on innovation and driving this design point really hard. We have things for, OK, I have a data center and I want more traditional access to it—we're doing a lot of work in that area, too. That's not what Project Blackbox is about, but we do have a lot of things that Andy Bechtolsheim is putting a lot of energy into.
Those markets don't go away; those are important markets, and any big customer of ours is a portfolio in any case. “Here's my core IT stuff, and, no, it doesn't make any sense for me to put my Siebel implementation into this thing.” On the other hand, if I go over to Salesforce.com, they'll go, “Yeah, sure.”
Going back to that core enterprise: well, that core enterprise might be a company in the package transportation business, and what it really needs is high-performance computing, because that solves the traveling salesman or cargo-loading problem. Boy, that's going to consume a lot of power and computing power, and how would they go about deploying that? Now here's an opportunity to do that in a much more efficient style. These things will coexist.
I understand that. But if I'm sitting there with a traditional data center and I've got some power and cooling problems, and all of a sudden I learn that Sun has this new cooling design for Project Blackbox, I may wonder about the chances of seeing that design become available for my data center.
We are doing similar things in the data center. It is an important problem to solve. People kick around numbers like 70 percent of IT shops being out of data center space or power or cooling capacity. You certainly feel it in talking with customers; so many of them say, “I'm out of space. Help me.”
Here's a real quick way for them to incrementally solve that problem. So it may be that we should be focusing on how we get the right requirements and engineering into it, so that we're capturing more and more of the customer requirements and not just looking at the data center from this one design point.
So as we said, there's the really efficient high-scale scale-out stuff; there are people who have extreme mobility requirements, like governments and those applications; and then there's this third area that I think we're both talking about, which is core IT stuff where the data center just isn't working … right now. What role can this play?
What role will virtualization play in this project?
Virtualization, in the way that we think about it in a context like Blackbox, is really an essential lubricant. It's the lubricating plate between the physical hardware assets—whether they're racked or bladed or whatever they are—and the logical demands on them in terms of the operating system and the software stacks on top of them.
It's a lubricating plate because you can move things around. You can take an image and put it on its own system, maybe put another one next to it on a system that isn't being efficiently utilized, maybe move it somewhere else if you need more horsepower, maybe put a thousand of them together in a grid to get something else accomplished. So it's really that abstraction layer, or lubricating plate, between operating systems and hardware, and we get out of the idea that when you deploy an application, it's forever bound to the kit on which you deploy it.
That's really an essential piece in thinking about how you get maximum utility out of a Blackbox design, because you really don't want to care at the detailed level [that] my stack is running on exactly that [server] sitting in rack 3, position 2. That's not the idea. It's a sea of computing stuff.
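The "lubricating plate" idea—images bound to whatever host has capacity, and rebindable later—can be sketched as a toy placement scheduler. Everything below is hypothetical for illustration (the host names, capacity units and first-fit policy are assumptions, not any real Sun or virtualization API):

```python
# Minimal sketch of virtualization as an abstraction layer: images are
# placed on whichever host has free capacity and can be migrated later.
# Host names, capacities and the first-fit policy are illustrative only.

class Host:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # abstract compute units
        self.images = {}           # image name -> capacity demand

    def free(self):
        return self.capacity - sum(self.images.values())

def place(hosts, image, demand):
    """First fit: bind the image to any host with room, not a fixed one."""
    for h in hosts:
        if h.free() >= demand:
            h.images[image] = demand
            return h.name
    raise RuntimeError("no capacity anywhere in the pool")

def migrate(hosts, image, src):
    """Detach an image from one host and rebind it elsewhere in the pool."""
    demand = src.images.pop(image)
    return place([h for h in hosts if h is not src], image, demand)

pool = [Host("rack3-pos2", 10), Host("rack5-pos1", 16)]
place(pool, "web-stack", 6)    # lands wherever there is room
place(pool, "db-stack", 7)     # first host is too full, so second host
print(migrate(pool, "web-stack", pool[0]))  # rebinds without caring where
```

The caller never names a specific machine—which is the point Papadopoulos makes: the workload addresses a sea of capacity, not "rack 3, position 2."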