Microsoft Azure Harnesses Nvidia Tesla V100 GPUs for AI, HPC Workloads

New NCv3 series Azure virtual machines will use Tesla V100 GPUs from Nvidia to accelerate cloud-based AI workloads.


Microsoft is outfitting its cloud data centers with new, high-performance GPUs from Nvidia, enabling customers to get a little more ambitious with their artificial intelligence and high-performance computing initiatives.

GPUs are all the rage among cloud companies looking to accommodate the AI workloads of businesses that are using machine learning and other techniques to power their intelligent applications and services. Amazon Web Services, Google and IBM all offer GPU-based cloud instances.

Microsoft, too, offers a range of GPU cloud computing choices, each with its own price and performance characteristics. Soon, customers will have one more high-performance option to weigh, according to Corey Sanders, director of Compute at Microsoft Azure.

In the next few weeks, the company is kicking off a beta of NCv3, an Azure virtual machine (VM) series based on Tesla V100 hardware from GPU maker Nvidia.

"The NCv3-series virtual machines will use Nvidia Tesla V100 GPUs, which are the latest GPUs from Nvidia. Like our previous GPU sizes, Azure is the only cloud with dedicated InfiniBand interconnects to enable incredibly fast multi-VM computations," wrote Sanders in a Nov. 13 announcement. "Our GPU sizes also offer PCIe configuration with direct support for Azure premium storage."

Debuting at the GPU Technology Conference in May, Nvidia's data center-class Tesla V100 GPUs are based on the company's Volta GPU architecture. The seventh-generation GPU chip architecture packs 21 billion transistors and, according to Nvidia, can deliver the equivalent performance of 100 traditional CPUs when training deep learning workloads.

Preview access to NCv3 series instances will be available first in the East US Azure region in Virginia. Microsoft is accepting preview sign-ups on its website.
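As a rough sketch of what provisioning a GPU VM in the series might look like once it is available, the Azure CLI flow is shown below. The size name Standard_NC6s_v3, the resource group, and the image alias are assumptions for illustration; check Microsoft's documentation for the sizes and regions actually offered.

```shell
# Hypothetical example: create a resource group, then an NCv3-class VM.
# Size name (Standard_NC6s_v3) and image alias are assumptions; verify
# availability in your subscription and region before running.
az group create --name gpu-demo-rg --location eastus

az vm create \
  --resource-group gpu-demo-rg \
  --name ncv3-demo \
  --size Standard_NC6s_v3 \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys
```

Once the VM is up, running nvidia-smi over SSH is the usual way to confirm the Tesla GPU is visible to the guest OS.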

Of course, Microsoft isn't the only provider of Nvidia Tesla V100-based cloud instances. On Oct. 25, Amazon unveiled new EC2 P3 instances with up to eight GPUs, 128GB of GPU memory, 64 virtual CPUs and 488GB of main memory.

Nvidia Volta GPUs are gaining ground in both the public cloud and server markets.

Alibaba, Amazon Web Services, Baidu, Oracle, Tencent and, of course, Microsoft have all announced Volta-based cloud offerings. Among server makers, Dell EMC, Hewlett Packard Enterprise (HPE), Huawei, IBM and Lenovo have all embraced the technology.

Meanwhile, two other GPU-based Azure virtual machine series will soon be shedding their preview status and accepting production workloads, Sanders said. On Dec. 1, Azure NCv2 and ND series virtual machines will be generally available in the United States, Europe and Asia.

NCv2 virtual machines use up to four Nvidia Tesla P100 GPUs and InfiniBand, a low-latency networking technology, to accelerate high-performance computing (HPC) workloads. The ND series uses up to four Nvidia Tesla P40 GPUs, each with 24GB of GPU memory, to train AI models and perform deep learning.

Users of Microsoft's preconfigured Data Science Virtual Machine, a cloud-based big data analytics offering, won't be left behind, Sanders assured. The company is updating the service's images so that users can capitalize on the improved performance the new GPUs provide.

Pedro Hernandez

Pedro Hernandez is a contributor to eWEEK and the IT Business Edge Network, the network for technology professionals. Previously, he served as a managing editor for the network of...