Intel and SGI are using a supercomputer to test a data center cooling technology that calls for the system to be fully submerged in a liquid developed by 3M.
Officials with the three companies say the cooling method could cut data center cooling costs by as much as 95 percent and significantly reduce the floor space needed for the infrastructure.
Keeping the data center cool enough to avoid problems caused by heat has been an issue for organizations for years. Heat can cause a range of headaches for IT administrators, from huge power bills and system failures to shorter component life and environmental concerns. System and component makers have been working to make their products more energy-efficient, and data center administrators use a variety of methods—like hot/cold aisles, massive air conditioning units and outside air—to keep the facilities cool.
Vendors have experimented with liquid cooling for years, from IBM’s Rear Door Heat Exchanger technology, which can be installed on server rack doors and uses chilled water, to fully immersive liquids. Fujitsu last year introduced a cooling technology called Liquid Loop Cooling, which uses a combination of liquid and air. Intel two years ago worked with a company called Green Revolution Cooling, whose CarnoJet System keeps systems cool by submerging them in a dielectric fluid called GreenDEF. The liquid is a blend of white mineral oil that company officials say has 1,200 times the heat capacity by volume of air. They said the process can reduce power use related to cooling by as much as 95 percent and the total power consumption of the data center by half.
The need for more efficient cooling systems will only grow as trends like cloud computing, virtualization and the Internet of Things promise to increase the demand on data centers for more compute, storage and networking capabilities.
In the proof-of-concept run by Intel, SGI and 3M, the vendors submerged an SGI ICE X—the latest generation of the OEM’s distributed memory supercomputer, which runs on Intel’s Xeon E5-2600 server chips—into 3M’s Novec Engineered Fluid, another dielectric fluid. 3M engineers have developed a two-phase immersion process that they said will reduce cooling energy costs by 95 percent.
It also will mean that organizations will no longer have to rely on municipal water supplies for evaporative cooling—greatly reducing the water consumption in the facility—and that the heat taken from the system can be reused for heating systems and other tasks, such as the desalination of sea water, according to 3M officials.
The process works by submerging the hardware in the Novec fluid. Heat from the system is pulled away by the fluid, which boils; the vapor then condenses and falls back into the bath as liquid.
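The steady-state behavior of that two-phase cycle can be sketched with simple arithmetic: the rate at which fluid boils off scales with the heat load divided by the fluid's latent heat of vaporization, and the condenser returns the same mass to the bath. The latent-heat figure below is an illustrative value for a Novec-class fluid, not a published 3M specification.

```python
# Rough sketch of the passive two-phase cycle: server heat boils
# the dielectric fluid, the vapor condenses and returns to the bath.
# The latent heat used here is illustrative, not a 3M spec.

LATENT_HEAT_J_PER_KG = 88_000  # assumed value for a Novec-class fluid

def boil_off_rate_kg_per_s(heat_load_w: float) -> float:
    """Mass of fluid vaporized per second to absorb a given heat load."""
    return heat_load_w / LATENT_HEAT_J_PER_KG

# A hypothetical 10 kW server tray vaporizes roughly 0.11 kg of fluid
# per second; in steady state the condenser returns that same mass to
# the bath, so no fluid is consumed.
print(round(boil_off_rate_kg_per_s(10_000), 3))
```

Because the cycle is closed, the only energy input needed on the cooling side is for the condenser, which is where the large claimed savings over chilled-air or evaporative systems would come from.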
Because of the tremendous gains in energy efficiency, organizations will be able to more tightly pack components and do more computing in less space. According to 3M, businesses will be able to do the same amount of computing in a 10th of the space used by traditional air-cooled data centers. Using the Novec cooling system, a data center can support 100 kilowatts of computing power in a square meter. By contrast, traditional air-cooled facilities can support up to 10 kilowatts per square meter.
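The space-savings claim above follows directly from the two power densities cited. A quick back-of-the-envelope check, using the article's figures and a hypothetical 1 MW compute load:

```python
# Sanity check of the floor-space claim using the densities cited
# in the article (100 kW/m^2 immersion-cooled vs. 10 kW/m^2 air-cooled).
# The 1 MW load is a hypothetical example, not from the article.

NOVEC_KW_PER_SQM = 100   # immersion-cooled density cited by 3M
AIR_COOLED_KW_PER_SQM = 10  # air-cooled density cited for comparison

def floor_space_sqm(total_compute_kw: float, density_kw_per_sqm: float) -> float:
    """Floor area needed to host a given compute load at a given density."""
    return total_compute_kw / density_kw_per_sqm

load_kw = 1_000  # hypothetical 1 MW data center
air_space = floor_space_sqm(load_kw, AIR_COOLED_KW_PER_SQM)   # 100 m^2
novec_space = floor_space_sqm(load_kw, NOVEC_KW_PER_SQM)      # 10 m^2

print(air_space / novec_space)  # 10.0 -> one-tenth the floor space
```

The tenfold density ratio is exactly what yields 3M's "a 10th of the space" figure.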
“As the backbone of the data economy, modern data centers must increase the raw performance they deliver, but also do so efficiently by containing power consumption and operating costs,” Charles Wuishpard, vice president of Intel’s Data Center Group and general manager of workstation and high performance computing at the company, said in a statement.
SGI officials noted that their ICE X supercomputer can scale from tens of teraflops of performance to tens of petaflops, and allows for tighter packaging of components and easy scalability. Having more efficient and less costly ways of managing data center and system cooling will help ICE X customers better leverage capabilities of the supercomputer, they said.
According to the vendors, more in-depth evaluation of the Novec system installation will begin this month. They also are working with the Naval Research Laboratory, Lawrence Berkeley National Labs and APC by Schneider Electric to evaluate an identical system, with the hope of showing that it can be used in data centers of any scale.