GPU Support Becomes Available for Google Cloud Customers

Nvidia graphics processing units (GPUs) will enable Google Cloud customers to accelerate machine learning and other computationally intensive applications in the cloud.


Google Cloud customers that require extra computational power for machine learning and other processor-intensive apps can now spin up virtual machines backed by Nvidia graphics processing units (GPUs). Starting this week, Google Cloud customers in certain regions of the United States, Asia and Europe will be able to attach as many as eight Nvidia GPUs to their custom Google Compute Engine virtual machines. The technology will let them accelerate applications such as seismic analysis and video and audio transcoding more easily than was previously possible.

Other applications that can benefit from the new GPU support include computational finance, molecular modeling, big data analysis, fluid dynamics, visualization and computational chemistry, Google product manager John Barrus announced in a blog post this week.

The availability of the new GPUs eliminates the need for organizations to construct their own GPU cluster for powering processor-intensive applications.

“GPUs on Google Compute Engine are attached directly to the VM, providing bare-metal performance” for organizations that need the extra computing power, Barrus said.

The Nvidia GPUs—each of which comes with 12GB of high-performance memory—are fully integrated with Google’s Cloud Machine Learning managed service for building and running machine learning models.

The processors make it possible for enterprises to train their machine learning models substantially faster, Barrus noted. For instance, companies can reduce development cycles and make speedier changes to their machine learning models by running their application in distributed fashion across multiple GPUs instead of on a single machine.

GPU use will be billed by the minute, so enterprises pay only for actual use. The model, according to Barrus, will allow companies to spin up a large GPU cluster quickly and take advantage of the increased performance without any upfront capital investment.

U.S. customers will pay $0.70 per hour per GPU attached to a virtual machine. Google’s cloud customers in Asia and Europe will pay $0.77 per hour per GPU.
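The per-minute billing model combined with the per-GPU hourly rates makes cost estimation straightforward. The short sketch below illustrates the arithmetic using the rates quoted above; the function and rate table are illustrative constructs for this article, not an official Google Cloud pricing API.

```python
# Rough cost estimator for GPUs attached to a VM, based on the
# per-minute billing model and the hourly rates quoted in this
# article (illustrative only; not an official pricing API).

HOURLY_RATE_PER_GPU = {"us": 0.70, "europe": 0.77, "asia": 0.77}

def gpu_cost(region: str, gpus: int, minutes: int) -> float:
    """Estimate the GPU portion of a VM's bill in U.S. dollars.

    Billing is per minute: cost = (hourly rate / 60) * minutes * GPU count.
    """
    if not 1 <= gpus <= 8:  # up to eight GPUs per VM, per the article
        raise ValueError("between 1 and 8 GPUs can be attached to a VM")
    rate_per_minute = HOURLY_RATE_PER_GPU[region] / 60
    return round(rate_per_minute * minutes * gpus, 2)

# Example: eight GPUs in a U.S. region for 90 minutes
# (0.70 / 60) * 90 * 8 = 8.40
print(gpu_cost("us", 8, 90))
```

Because billing is per minute, a short burst on a large cluster costs no more than the minutes actually consumed, which is the scenario Barrus describes.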

Currently, Google only offers GPUs from Nvidia, but the company has said it plans to give cloud customers the option of choosing AMD FirePro GPUs as well.

Virtual machines with GPUs can run applications much faster by offloading compute-intensive tasks to the GPU while running the remaining code on traditional CPUs. Nvidia has said its GPU accelerators play a big role in accelerating applications running in a variety of platforms including connected cars and robots.

According to Google, virtual machines with GPUs can achieve tens of teraflops of performance and allow enterprises to complete certain compute tasks in a matter of hours compared to multiple days previously.

Amazon has been offering customers of its AWS cloud services access to GPU-accelerated VMs since last September. Like Google’s offering, Amazon’s P2 cloud instance type supports up to eight Nvidia GPU accelerators and is designed for machine and deep learning along with other computationally intensive apps.

Microsoft, too, has been offering Azure cloud customers the option of running their applications on GPU-accelerated virtual machines since last August. The company also allows customers to pay for GPU use by the minute, as Google does.

Jaikumar Vijayan

Vijayan is an award-winning independent journalist and tech content creation specialist covering data security and privacy, business intelligence, big data and data analytics.