Microsoft Expanding Azure's GPU Processing Options for AI Workloads

Later this year, Microsoft's cloud customers will have new, higher-performance GPU processing options to choose from.

Microsoft is taking a cue from hardcore PC gamers and upgrading its graphics hardware. But rather than splash eye-catching visuals onto computer monitors, the software giant is accelerating artificial intelligence (AI) workloads on the cloud.

The company announced this week that Azure customers using Microsoft's GPU-assisted virtual machines for their AI applications will have newer, faster-performing options later this year.

Capitalizing on the latest GPU innovations from computer graphics hardware maker Nvidia, Microsoft announced new ND-series Azure virtual machines, promising a big performance boost over the current offerings.

Like Google and other companies using graphics processing units (GPUs) to drive artificial intelligence, Microsoft has enlisted the technology to accelerate machine learning, deep learning and other AI workloads on its cloud. GPUs are particularly suited for these tasks, courtesy of massively parallel microarchitectures that lend themselves to AI applications, which are also typically parallel in nature.
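The appeal of that parallelism can be shown with a toy sketch. In a matrix-vector product, every output element is independent of the others, so a GPU can hand each one to its own hardware thread instead of looping over rows. The snippet below uses a plain Python thread pool as a stand-in for those threads; it is purely illustrative and involves no Azure or Nvidia APIs.

```python
# Toy illustration of the data parallelism GPUs exploit: every output
# element of a matrix-vector product is independent, so each one can be
# computed by a separate worker. A thread pool stands in for GPU threads.
from concurrent.futures import ThreadPoolExecutor

def matvec_parallel(matrix, vector):
    """Compute y = A.x with one task ('GPU thread') per output row."""
    def one_row(row):
        return sum(a * x for a, x in zip(row, vector))
    with ThreadPoolExecutor() as pool:
        return list(pool.map(one_row, matrix))

A = [[1, 2], [3, 4], [5, 6]]
x = [10, 1]
print(matvec_parallel(A, x))  # each row computed independently -> [12, 34, 56]
```

Real deep-learning frameworks do the same thing at much larger scale, launching thousands of such independent computations at once on the GPU.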

"This new series, powered by Nvidia Tesla P40 GPUs based on the new Pascal Architecture, is excellent for training and inference," said Corey Sanders, director of Compute at Microsoft Azure, in a May 8 announcement. "These instances provide over 2x the performance over the previous generation for FP32 (single precision floating point operations), for AI workloads utilizing CNTK [Microsoft Cognitive Toolkit], TensorFlow, Caffe, and other frameworks."

In addition to improved performance, the new virtual machines offer more headroom for customers with bigger AI ambitions.

"The ND-series also offers a much larger GPU memory size (24GB), enabling customers to fit much larger neural net models," continued Sanders. "Finally, like our NC-series, the ND-series will offer RDMA and InfiniBand connectivity so you can run large-scale training jobs spanning hundreds of GPUs." InfiniBand is a high-throughput, low-latency networking standard favored in high-performance computing (HPC) environments.
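Training jobs that span hundreds of GPUs typically rely on data parallelism: each GPU computes gradients on its own slice of the batch, and the results are averaged across all workers before the model is updated. A minimal sketch of that averaging step follows; in a real cluster the average is computed by an all-reduce collective running over the RDMA/InfiniBand fabric, which this pure-Python stand-in does not attempt to model, and the gradient values are invented for illustration.

```python
# Sketch of the gradient-averaging step behind multi-GPU data-parallel
# training. Each "worker" stands in for one GPU; in practice the
# averaging is an all-reduce over RDMA/InfiniBand, not a local loop.

def all_reduce_mean(worker_grads):
    """Average per-worker gradient vectors element-wise."""
    n = len(worker_grads)
    return [sum(g) / n for g in zip(*worker_grads)]

# Three workers, each holding a gradient for the same two parameters:
grads = [[2.0, -4.0], [4.0, -2.0], [0.0, -6.0]]
print(all_reduce_mean(grads))  # -> [2.0, -4.0]
```

Because every worker ends up with the same averaged gradient, all model replicas stay in sync after each update, which is what lets a single training job scale across hundreds of GPUs.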

ND-series virtual machines can also be used to accelerate some non-AI, HPC workloads. Candidates include DNA sequencing, protein analysis and graphics rendering, added Sanders.

The current NC-series portfolio is also getting an upgrade. Soon to be known as NCv2, the new offerings are powered by Nvidia Tesla P100 GPUs that deliver twice the computational performance of their predecessors, Sanders said.

Technical specifications for the upcoming ND-series and NCv2-series virtual machines are available in Sanders' blog post.

Meanwhile, Microsoft faces stiffer competition as business demand for cloud-based AI solutions heats up.

In February, Google announced that it was allowing its cloud customers in certain regions to attach Nvidia GPUs to their Google Compute Engine virtual machines. One obvious benefit is that customers no longer have to build or acquire their own GPU clusters and make room for them in their data centers. Another is the substantially shorter time it takes to train machine learning models using the system's distributed approach.

Last fall, Amazon began offering new EC2 (Elastic Compute Cloud) instances with up to 16 Nvidia GPUs. The company also launched a new deep learning AMI (Amazon Machine Image) containing the Caffe, MXNet, TensorFlow, Theano and Torch frameworks.

Pedro Hernandez

Pedro Hernandez is a contributor to eWEEK and the IT Business Edge Network, the network for technology professionals. Previously, he served as a managing editor for the network of...