Google’s cloud customers now have a new option for using GPUs to run compute-intensive applications without having to pay full price for the extra performance.
Starting this week, Google is making its preemptible pricing model generally available for GPUs attached to preemptible virtual machines (VMs) on Google’s cloud.
The preemptible option allows companies to get certain Google cloud computing resources at substantially lower prices than under on-demand pricing models, on the understanding that Google can take over—or preempt—the resources at any time and with very little notice.
The company has positioned the pricing model as ideal for companies that need only temporary or infrequent access to high-performance computing resources—such as GPUs—and are unable or unwilling to pay the premium prices associated with committed usage models.
With Google’s preemptible GPUs now becoming generally available, enterprises can get access to GPU-enhanced performance at prices up to 70 percent lower than those for GPUs attached to on-demand virtual machines, Google Product Manager Chris Kleban wrote in a blog post on June 11.
That discount is substantially deeper than the 50 percent savings Google had said enterprises could expect when the company initially announced the preemptible GPU option in January this year.
At that time, Google had said enterprises could attach Nvidia’s K80 and P100 GPUs to preemptible VMs for $0.22 and $0.73 per GPU hour, respectively. With this week’s announcement, those prices have been reduced further to $0.135 and $0.43 per GPU hour, respectively. In comparison, the cost for attaching the same GPUs to on-demand VMs is $0.45 and $1.46 per GPU hour, respectively.
Preemptible pricing for the highest-end Nvidia Tesla V100 GPU is now $0.74 per GPU hour, compared with the $2.48 per hour that enterprises would otherwise pay for the same GPU on an on-demand VM.
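The roughly 70 percent figure can be checked directly from the quoted rates. A quick sketch, using only the per-GPU-hour prices cited above:

```python
# Preemptible vs. on-demand GPU rates in USD per GPU hour, as quoted above.
rates = {
    "K80":  (0.135, 0.45),
    "P100": (0.43, 1.46),
    "V100": (0.74, 2.48),
}

for gpu, (preemptible, on_demand) in rates.items():
    discount = (1 - preemptible / on_demand) * 100
    print(f"{gpu}: {discount:.1f}% below on-demand")
```

Each GPU type works out to a discount of approximately 70 percent off the on-demand rate.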
“Preemptible GPUs are ideal for customers with short-lived, fault-tolerant and batch workloads such as machine learning (ML) and high-performance computing (HPC),” Kleban wrote.
The option gives enterprises a way to access large-scale GPU infrastructure at predictably low prices and without having to bid for compute capacity, he noted. From a performance standpoint, preemptible GPUs perform exactly the same as equivalent on-demand GPUs. The big differences are that Google can shut them down with only 30 seconds' notice, and that a preemptible instance can run for a maximum of 24 hours at a stretch.
For billing purposes, Google will consider any GPU that is attached to a preemptible VM as a preemptible GPU, Kleban said.
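As a sketch of what attaching a GPU to a preemptible VM looks like in practice, an instance can be created with the `gcloud` CLI along these lines; the zone, machine type, image family and accelerator type here are illustrative assumptions, not values from the announcement:

```shell
# Illustrative sketch: zone, machine type, image and GPU count are assumptions.
# GPU instances must use a TERMINATE maintenance policy (no live migration).
gcloud compute instances create my-preemptible-gpu-vm \
    --zone us-central1-a \
    --machine-type n1-standard-4 \
    --preemptible \
    --accelerator type=nvidia-tesla-k80,count=1 \
    --maintenance-policy TERMINATE \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud
```

Because the `--preemptible` flag applies to the whole instance, the attached accelerator is billed at the preemptible GPU rate, consistent with Kleban's description.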
GPUs offer enterprises a way to accelerate hardware performance. With preemptible GPUs, organizations that can benefit from such hardware acceleration now have a relatively affordable way to access that performance. In addition to enterprises, researchers and academic institutions can also benefit from the lower cost of preemptible GPUs, the Google product manager said.