Sun, of Santa Clara, Calif., already offered its Grid Rack to customers: racks populated with technology ordered by the customer were assembled by Sun and then delivered to the user's site.
The vendor was looking to transfer that capability to an entire data center setup, and the largest size that made sense was a standard shipping container, Papadopoulos said.
A key to Sun being able to put the technology into such a compact space is the ability to use water to cool the systems, Gadre said.
Water is more efficient than air, which is the method most widely used in traditional data centers, he said.
Inside the shipping containers, the systems are set up front-to-back along the wall of the container, with heat exchangers between each one, Papadopoulos said.
The warm air from one is passed through an exchanger, where it's chilled and then used to cool the next server, he said.
"It forms this kind of perfect cyclonic flow inside the box, and it's very quiet, it's very efficient," Papadopoulos said.
Charles King, an analyst with Pund-IT Research, said the concept addresses a lot of concerns that businesses have, but that Sun is going to need to answer some key questions on issues such as security before the shipping container business takes off.
"It's an interesting idea because it addresses a lot of the challenges that people have concerning data center facility costs, in particular the real estate component," said King, in Hayward, Calif.
"The whole cost issues around data centers have little to do with the technology and everything to do with the support and construction of the facility."
Being able to run multiple containers together—even stacking them—would help address those issues, he said.
However, most data centers have several layers of security, and at a time when disaster recovery and compliance are key issues, having a data center that's housed inside a shipping container might not be enough security for many enterprises, King said.
Gadre admitted that the Blackbox idea won't be for everyone, including some who might want the highest levels of security. But in areas such as Web serving and high-performance computing (HPC), it should find customers, he said.
The idea of integrating cooling, networking and power distribution in a central fashion with the hardware is getting looks from a number of OEMs.
Hewlett-Packard, of Palo Alto, Calif., with its Lights Out Project, is pursuing something similar on a smaller scale, exploring an infrastructure model that brings power and cooling closer to the compute nodes themselves.
The goal is similar: to create an environment that addresses power and cooling concerns while increasing flexibility inside the facility.
Papadopoulos said this is a trend in the industry that is going to grow in importance.
"I think there is a huge pent-up demand for somebody to figure this out," he said.
Power and cooling have become key issues in data centers as system form factors have become more dense, particularly with the rise of blade computing.
One of the key promises of blades—being able to pack more compute power into smaller areas—is hindered by the amount of power consumed and heat generated.
Major technology consumers like Google predict that they will soon pay more to power and cool their systems than to buy the machines themselves.
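The prediction above is easy to sanity-check with a back-of-envelope calculation. The figures below (server price, power draw, cooling overhead, electricity rate) are illustrative assumptions, not numbers from the article:

```python
# Back-of-envelope: years until cumulative power-and-cooling spend
# exceeds a server's purchase price. All inputs are assumed values.
server_price = 3000.0      # assumed purchase price per server, USD
server_power_w = 400.0     # assumed average draw, watts
cooling_overhead = 1.0     # assume 1 W of cooling per 1 W of IT load
electricity_rate = 0.10    # assumed USD per kWh

# Total electrical load including cooling, in kilowatts.
total_kw = server_power_w * (1 + cooling_overhead) / 1000.0

# Running 24 hours a day, 365 days a year.
annual_cost = total_kw * 24 * 365 * electricity_rate

years_to_parity = server_price / annual_cost

print(round(annual_cost, 2))      # -> 700.8 (USD per year)
print(round(years_to_parity, 1))  # -> 4.3 (years)
```

Under these assumptions, power and cooling overtake the hardware price in roughly four years, well within a typical server's service life, which is the dynamic driving the vendors' efficiency push.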
IT vendors are addressing these issues in a number of ways. Chip makers like Advanced Micro Devices and Intel are producing more energy-efficient processors; OEMs are building systems with power consumption in mind; and software makers are putting power monitoring and management functions into their products.
Virtualization—the ability to run multiple operating systems and applications on single physical machines—also is an important technology.
Sun has been vocal on these issues. The company is promoting its UltraSPARC T1 "Niagara" chip, which offers up to eight processing cores while consuming less power—about 70 watts—than many other processors.
The company also is using AMD's Opteron in its x86 servers.