If Facebook officials have their way, their Open Compute Project will go beyond servers and power supplies, touching on every aspect of a data center’s infrastructure.
The initiative, which kicked off in April when Facebook open-sourced the server and data center specifications the social networking giant employed in building its data center in Prineville, Ore., now offers an impressive array of members, from Intel, Asus and Rackspace to Mellanox, Huawei and Red Hat, not to mention a few research and education institutions.
It’s an indication of the various directions in which the project is rapidly moving, Amir Michael, hardware design manager at Facebook, said in an interview with eWEEK during the recently concluded SC 11 supercomputing show in Seattle. Facebook already is moving forward with the next generation of the custom servers it’s designed, Michael said.
At the same time, project members also are looking to tackle other aspects of the data center, including systems management, storage and I/O. The push in these directions will help build momentum toward solving the key issue Facebook officials saw when evaluating data center technology: proprietary products from large and small vendors alike can broadly address the mainstream needs present in most enterprises, but they do not meet the unique demands a particular business may have.
“One of the things we saw as a problem [with server makers] was understanding what a customer’s requirements are,” said Michael, who presented a talk on Facebook’s data center work and the Open Compute Project at the SC 11 show. “So we said, ‘Well, here it is, these are our requirements.'”
Facebook engineers set out about two years ago to start designing their own servers using standard off-the-shelf technologies. Up to that point, the company had been using systems from traditional OEMs. Facebook worked with chip makers Intel and Advanced Micro Devices, as well as systems makers Hewlett-Packard and Dell, to create the custom servers.
The aim was to build systems that offer the performance needed to run a fast-growing social network with 800 million-plus members while keeping down capital, power and cooling costs in densely populated data centers. The Facebook-developed systems are 1.5U (2.625 inches) tall, rather than the more traditional 1U (1.75 inches), which, among other benefits, makes for better air flow and lower cooling costs, Michael said. The systems carry none of the paint or logos found on servers from OEMs, which both reduces capital costs and makes them lighter. They also use a more energy-efficient power supply and are easier to service, with tool-less components from fans to power supplies.
The Oregon facility also utilizes outside air to keep the systems cool, rather than running expensive chiller units, Michael said.
The result of the work was that the Oregon facility is 38 percent more energy-efficient, and 24 percent less expensive to run, than Facebook’s other data centers, he said. The Oregon data center also has a power usage effectiveness (PUE) ratio of 1.07. The PUE ratio is a measure of how efficiently a facility uses its energy; the closer to 1.0, the better. The Environmental Protection Agency’s benchmark for an efficient data center is a PUE of 1.5.
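The PUE arithmetic is simple: divide the facility's total power draw by the power delivered to the IT equipment itself. A minimal sketch, using illustrative wattage figures that are assumed rather than taken from the article:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A value of 1.0 would mean every watt drawn by the facility reaches
    the computing hardware, with nothing lost to cooling, power
    conversion or lighting.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical loads: a facility drawing 1,070 kW in total to power
# 1,000 kW of IT equipment matches the 1.07 figure cited for Prineville.
print(round(pue(1070, 1000), 2))  # 1.07
# The same IT load at the EPA's 1.5 benchmark would draw 1,500 kW total.
print(round(pue(1500, 1000), 2))  # 1.5
```

At a 1.07 PUE, only about 7 percent of the facility's power goes to overhead; at 1.5, overhead consumes a third again as much power as the servers themselves.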
Facebook expects to get similar results as it builds new data centers, Michael said. Last month, company executives said they plan to build their next data center in Lulea, Sweden, just on the edge of the Arctic Circle, to serve users in Europe and other regions. The site was chosen for its cold air and access to hydroelectric power.
The company also is working on its next generation of servers, which will include such technologies as the Intelligent Platform Management Interface (IPMI) and the ability to reboot remotely over the LAN. They also will continue to be powered by Intel and AMD chips, though Michael said the company is keeping an eye on other processors, including those from ARM Holdings. ARM-designed chips from the likes of Nvidia, Qualcomm and Samsung are found in most smartphones, tablets and other mobile devices, but ARM also is looking to move up the ladder and into low-power servers.
“We’re always interested in whatever CPU works best,” he said.
Facebook officials also are interested in leveraging what comes out of the various Open Compute Project working groups that will focus on storage, systems management and interconnect technologies, Michael said. The company has never intended to run the project, he said; instead, the hope is that the community will evolve to the point where Facebook is just another participant that can take advantage of the open technologies that come out of it.
Facebook’s decision to open up its hardware specifications in April was a significant change for an industry where other businesses, such as Google and Amazon, have closely guarded their data center specs, using them instead as a competitive advantage. However, Facebook officials saw an open community as the way to faster innovation and more product options.
On Oct. 27, the Open Compute Project announced it was forming a foundation to lead the effort, complete with a mission statement and guiding principles, with directors and advisers coming from such places as Arista Networks, Facebook, Rackspace and Intel. In a blog post on the project’s Website, Frank Frankovsky, director of hardware design and supply chain at Facebook, said he was surprised at the level of enthusiasm for the idea since it was announced in April.
“A great deal of work remains to be done,” Frankovsky said. “We need to continue to grow the community and enable it to take on new challenges. We need to ensure that, as the community evolves, it retains its flat structure and its merit-based approach to evaluating potential projects. And we need to keep the community focused on delivering tangible results. But what began a few short months ago as an audacious idea (what if hardware were open?) is now a fully formed industry initiative, with a clear vision, a strong base to build from and significant momentum. We are officially on our way.”