Microsoft Puts FPGAs to Work for Azure and Bing

Both Microsoft's cloud and its search engine are using field-programmable gate array technology to speed up workloads.


Project Catapult, a field-programmable gate array (FPGA) project that Microsoft established in 2011, is starting to pay off for users of the company's services, according to the Redmond, Wash., software giant.

Microsoft today shared a behind-the-scenes look at how FPGA technology is transforming its Bing search engine and helping to accelerate Azure workloads. FPGAs, as the term suggests, are chips that can be reconfigured, or "reprogrammed," through software after manufacture, enabling operators to accelerate select workloads in hardware without the processing overhead of running them in software on the CPU.

Cloud providers, particularly those on the hyperscale end of the spectrum, have a keen interest in FPGAs. Beset by unpredictable and rapidly changing workloads, cloud companies can employ FPGAs to help them adapt faster and more readily to the demands of the market.

In Microsoft's case, the company is using FPGAs in an effort to build an artificial intelligence (AI) supercomputer in the cloud. Users will soon be able to judge those efforts for themselves by seeking answers on Bing, Microsoft's search engine.

Currently, Bing is using Catapult technology to generate faster, more accurate results. "By the end of 2016, an artificial intelligence technique called deep neural networks will be deployed on Catapult to help Bing improve its search results," blogged Microsoft Research writer Allison Linn on Oct. 17. "This AI supercomputer in the cloud will increase the speed and efficiency of Microsoft's data centers—and anyone who uses Bing should notice the difference, too."
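At a high level, a deep neural network for search ranking scores each candidate result for a query and reorders results by that score. The toy Python sketch below illustrates the idea with a single hidden layer and invented weights and features; it is an illustration only, not Bing's actual model, which is far larger and runs on FPGA hardware:

```python
import math

def score(features, w1, b1, w2, b2):
    # One hidden layer with tanh activation and a scalar output score.
    # A toy stand-in for the deep networks described in the article.
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Invented relevance features for two candidate results.
docs = {"doc_a": [0.9, 0.1], "doc_b": [0.2, 0.8]}

# Invented weights; in practice these are learned from training data.
w1, b1 = [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]
w2, b2 = [1.0, -1.0], 0.0

# Rank candidates by their network score, highest first.
ranked = sorted(docs, key=lambda d: score(docs[d], w1, b1, w2, b2),
                reverse=True)
print(ranked)  # ['doc_a', 'doc_b']
```

The appeal of FPGAs here is that this inner scoring loop, repeated for every candidate document, can be laid down directly in hardware.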

On the Azure cloud computing platform, FPGAs are being tapped to help Microsoft increase the speed and efficiency of its cloud servers with less hardware. Additionally, FPGAs are being used to speed up the networks of the company's data centers.

"To make data flow faster, [the Catapult team] inserted an FPGA between the network and the servers," Linn added. "That can be used to manage traffic going back and forth between the network and server, to communicate directly to other FPGAs or servers or to speed up computation on the local server."
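The "bump-in-the-wire" placement Linn describes — an FPGA sitting between the network and the server, seeing every packet — can be sketched in Python under loose assumptions. The class and field names below are hypothetical illustrations, not Catapult's actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dest: str        # "host" for normal server traffic, "fpga" for FPGA-to-FPGA messages
    payload: bytes

class BumpInTheWireFPGA:
    """Toy model of an FPGA placed between the NIC and the local server."""

    def __init__(self):
        self.host_queue = []   # traffic forwarded through to the server
        self.fpga_log = []     # messages the FPGA handles itself

    def on_packet(self, pkt: Packet):
        if pkt.dest == "fpga":
            # FPGA-to-FPGA traffic is handled in hardware and
            # never touches the host CPU.
            self.fpga_log.append(pkt.payload)
        else:
            # Ordinary traffic passes through (and could be transformed
            # in-line, e.g. compressed or encrypted) to the server.
            self.host_queue.append(pkt.payload)

fpga = BumpInTheWireFPGA()
fpga.on_packet(Packet("host", b"web request"))
fpga.on_packet(Packet("fpga", b"accelerator message"))
print(len(fpga.host_queue), len(fpga.fpga_log))  # 1 1
```

Because every packet crosses the FPGA anyway, the same device can manage traffic, talk directly to other FPGAs, or accelerate computation on the local server, as the article notes.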

Intel, better known for minting its fortunes with general-purpose processors, is also embracing FPGAs.

In April, the company announced it had begun shipping Xeon server chips with FPGA accelerator technology. The multichip platform pairs 14-nanometer Xeon E5-2600 v4 (Broadwell) processors with Arria 10 FPGAs from Altera.

Intel acquired Altera, a San Jose, Calif.-based provider of FPGA technology, late last year. The deal, worth an estimated $16.7 billion, was Intel's largest-ever acquisition and set the stage for a new business unit within Intel called the Programmable Solutions Group (PSG).

"Altera is now part of Intel, and together we will make the next generation of semiconductors not only better but able to do more," said Intel CEO Brian Krzanich in a Dec. 28 announcement. "We will apply Moore's Law to grow today's FPGA business, and we'll invent new products that make amazing experiences of the future possible—experiences like autonomous driving and machine learning."

Pedro Hernandez

Pedro Hernandez is a contributor to eWEEK and the IT Business Edge Network, the network for technology professionals. Previously, he served as a managing editor for the network of...