Facebook officials have said that the giant social networking company has saved more than $2 billion in data center expenses by using open technologies through the Open Compute Project it founded in 2011. They also have said the effort has helped accelerate the innovation of infrastructure resources, such as servers and networking appliances.
Facebook now wants to drive the benefits of open source into the highly competitive world of artificial intelligence (AI). On Dec. 10, company officials announced that they are contributing the designs for Facebook's massive "Big Sur" AI servers to the Open Compute Project (OCP), making the plans freely available. Through its Torch initiative, the company already has open-sourced some of its deep-learning code.
By doing the same with Big Sur, which Facebook reportedly has been working on for 18 months, anyone will be able to take the blueprints and build the servers to accelerate their own AI software efforts. For Facebook, that could mean faster access to such critical applications as advanced facial and voice recognition, object recognition in photos and videos, and automated texting. It also could drive down the cost of such innovation.
“Often the things we open-source become standards in the community and it makes it easier and cheaper for us to acquire the things later because we put them out there,” Serkan Piantino, director of engineering of Facebook’s AI group, said during a Dec. 10 conference call, according to Bloomberg.
A growing number of major tech players and smaller startups are driving the development of artificial intelligence and deep-learning technologies that will enable their products to learn, think and act in more humanlike ways. These technologies will be crucial to such ventures as self-driving cars, and vendors ranging from Google and IBM to Microsoft, Apple and Baidu are growing their investments in the space. Among recent moves, Apple in October bought AI startup Perceptio, and Google bought a stake in the German Research Center for Artificial Intelligence.
Last month, Google released its second-generation machine-learning technology, called TensorFlow, to the open-source community.
For its part, Facebook over the last four quarters has ramped up its R&D spending to more than $1 billion per quarter, with R&D accounting for about 28 percent of revenues in the most recent financial period. In 2013, the company hired Yann LeCun to head its AI research efforts, and last year it signed Vladimir Vapnik, who has expertise in machine-learning algorithms.
Facebook officials reportedly said during the conference call that Big Sur is twice as fast as the company’s current data center systems and about 30 percent less expensive than similar systems that are on the market. The AI server also uses GPUs from Nvidia’s Tesla Accelerated Processing Platform, which was released last year as a way to accelerate the training of the neural networks that are at the foundation of artificial intelligence technologies.
According to Nvidia officials, the Tesla platform can cut the days or weeks needed to train neural networks by a factor of 10 to 20, enabling developers to innovate and deliver results more quickly. Facebook is the first to use Nvidia's Tesla M40 GPU accelerators, released in November, to train neural networks. Each Big Sur system uses eight GPUs, each of which consumes 300 watts.
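The power and speedup figures above can be sanity-checked with simple arithmetic; here is a minimal sketch (the 30-day CPU training baseline is an invented number for illustration, not a figure from the article or from Nvidia):

```python
# Figures from the article: eight Tesla M40 GPUs per Big Sur system,
# each drawing 300 watts.
gpus_per_system = 8
watts_per_gpu = 300
gpu_power_per_system = gpus_per_system * watts_per_gpu
print(gpu_power_per_system)  # 2400 watts of GPU power per system

# Nvidia's claimed 10x-20x reduction in training time, applied to a
# hypothetical 30-day CPU-only training run:
cpu_training_days = 30  # invented baseline for illustration
print(cpu_training_days / 20, cpu_training_days / 10)  # 1.5 3.0
```

In other words, a training job that would occupy a CPU cluster for a month could, by Nvidia's claim, finish in roughly one and a half to three days, at a cost of about 2.4 kilowatts of GPU power per system.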
“Deep learning has started a new era in computing,” Ian Buck, vice president of accelerated computing at Nvidia, said in a statement. “Enabled by big data and powerful GPUs, deep learning algorithms can solve problems never possible before. Huge industries from Web services and retail to health care and cars will be revolutionized.”
GPUs offer more processing cores and better power efficiency than traditional chips from Intel and others, making them attractive for machine learning and artificial intelligence. Organizations in the high-performance computing (HPC) space have been using GPU accelerators from Nvidia and Advanced Micro Devices for many years as a way of improving the performance of their systems while keeping power consumption down.
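Why that core math maps so well onto many cores can be seen in a toy example (a hypothetical illustration, not code from Facebook or Nvidia): each output of a neural-network layer is an independent dot product, so a chip with thousands of cores can compute many of them simultaneously.

```python
# A single fully connected neural-network layer reduces to a
# matrix-vector product. Each output neuron depends only on its own
# row of weights, so every dot product below could run on a separate
# core in parallel -- the workload GPUs are built for.
weights = [  # 3 output neurons x 2 inputs (toy values)
    [1.0, 2.0],
    [3.0, 4.0],
    [5.0, 6.0],
]
inputs = [1.0, 2.0]

# One independent dot product per output neuron.
outputs = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
print(outputs)  # [5.0, 11.0, 17.0]
```

A real network repeats this pattern across millions of weights and thousands of training examples, which is why the per-core throughput and power efficiency of GPU accelerators matter so much for training time.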
Nvidia officials have targeted deep learning, a branch of machine learning, as a key growth area for the company, making it a central topic at the company's GPU Technology Conference earlier this year.