Mellanox Technologies is partnering with Iceotope to offer liquid-cooled InfiniBand and Ethernet network interconnects designed for the high-performance computing space.
The companies are combining Mellanox’s high-performance networking expertise with Iceotope’s liquid-cooling technology to offer 36-port FDR (Fourteen Data Rate) 56Gb/s InfiniBand and 40/56Gb/s Ethernet switches that can be housed in Iceotope’s PetaGen cabinet. Iceotope officials said they plan to offer support for EDR (Enhanced Data Rate) 100Gb/s InfiniBand in the future.
Iceotope’s liquid-cooling system comprises compute and networking blades, each a sealed, self-contained unit immersed in what officials call the primary coolant, a “sophisticated, non-conductive, engineered fluid.” The liquid rapidly moves heat away from the blades to a second coolant in the cabinet.
This cabinet coolant offers twice the heat capacity of mineral oil at half the viscosity and is circulated by highly energy-efficient pumps, according to company officials. It captures the heat coming off the system, which can then be reused to help heat buildings. Iceotope offers two versions of its PetaGen cabinet: the Peta4C, which holds up to 36 PetaGen blades, and the Peta8C, which holds up to 72.
“These liquid-cooled switches represent a huge step towards entirely fan-less HPC [high-performance computing],” Iceotope founder and Chief Visionary Officer Peter Hopton said in a statement. “The HPC industry is embracing liquid cooling at a remarkable rate. It’s just a case of technologies being available to match demand. Until now, it was widely accepted that the interconnect switch would never be liquid cooled. It’s great to be able to say that, thanks to our work with Mellanox, that’s no longer the case.”
Iceotope aims its products at HPC and supercomputing environments.
Liquid cooling is not new in the data center. IBM and other OEMs have offered such technology for several years, with recent efforts aimed at scale-out environments, where driving down power and cooling costs is increasingly important. Hewlett-Packard last year unveiled the Apollo line of supercomputers for HPC, including the Apollo 8000, which features liquid cooling. HP officials at the time said that liquid is 1,000 times more efficient at cooling systems than air, but that design challenges and concerns about water getting close to electronics have slowed broad adoption of liquid-cooling technologies.
Fujitsu, teaming with liquid-cooling technology vendor Asetek, unveiled Asetek’s Cool-Central Liquid Cooling Solutions for its Primergy CX400 M1 server and its cluster nodes. Fujitsu officials said the new liquid-cooled server solution will reduce cooling costs by 50 percent and increase data center density by up to five times in environments such as HPC.
For Mellanox, the partnership with Iceotope gives it another networking option it can offer HPC customers, according to Gilad Shainer, vice president of marketing at the company.
“Liquid cooling is an extremely exciting proposition in the HPC market,” Shainer said in a statement, adding that the vendor’s work with Iceotope will mean giving users another option for energy efficiency in their data centers “with high quality liquid-cooled alternatives to air.”