Last week virtualization giant VMware held its VMworld 2019 user conference in San Francisco. The 23,000 or so attendees were treated to notable virtualization innovation from the host company as well as its many partners. One of the more interesting announcements, one that I believe flew under the radar, was the joint NVIDIA–VMware initiative to bring virtual graphics processing unit (vGPU) technology to VMware’s vSphere and VMware Cloud on Amazon Web Services (AWS).
Virtual GPUs have been in use for some time, but they couldn’t run on virtual servers. Now businesses can run workloads, such as artificial intelligence (AI) and machine learning (ML), using GPUs on VMware’s vSphere.
IT needs to step up and own GPU-accelerated servers
Historically, workloads that required GPUs had to run on bare-metal servers. This meant each data science team in an organization had to buy its own hardware and incur that cost. Also, because these servers were used only for those GPU-accelerated workloads, they were often procured, deployed and managed outside of IT’s control. Now that AI, machine learning and GPUs are going somewhat mainstream, it’s time for IT to step up and take ownership. The challenge is that IT doesn’t want to take on the task of running dozens or hundreds of bare-metal servers.
GPU sharing is the top use case for vGPUs
The most obvious use case for vComputeServer is GPU sharing, where multiple virtual machines share a single GPU, similar to what server virtualization did for CPUs. This should enable businesses to accelerate their data science, AI and ML initiatives, because GPU-enabled virtual servers can spin up, spin down or migrate like any other workload. This will drive utilization up, increase agility and help companies save money.
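To make GPU sharing concrete, here is a minimal sketch of what it looks like from inside a guest. Run within a vGPU-enabled VM, it confirms that the virtual GPU is visible to CUDA applications just as a physical card would be, then executes a small computation on it. It assumes the NVIDIA guest driver and a CUDA-enabled build of PyTorch are installed in the VM.

```python
# Minimal check from inside a vGPU-enabled guest VM: the virtual GPU should
# appear to CUDA applications like any physical GPU, sized to its vGPU profile.
# Assumes the NVIDIA guest driver and a CUDA build of PyTorch are installed.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        # On a vGPU, total_memory reflects the profile's slice of the
        # physical card (e.g., a 4 GB partition of a 16 GB Tesla V100).
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.1f} GB")

    # Run a small matrix multiply to confirm the device actually executes work.
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
    torch.cuda.synchronize()
    print("vGPU computation OK:", tuple(y.shape))
else:
    print("No CUDA device visible -- check the vGPU profile and guest driver.")
```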
This innovation should also enable companies to run GPU-accelerated workloads in hybrid cloud environments. The virtualization capabilities, combined with VMware’s vSAN, VeloCloud SD-WAN and NSX network virtualization, create a solid foundation for running virtual GPUs in a true hybrid cloud.
Customers can continue to leverage vCenter
It’s important to understand that vComputeServer works with other VMware software such as vMotion, VMware Cloud and vCenter. That extended support matters because it lets enterprises take GPU workloads into highly containerized environments. Also, VMware’s vCenter has become the de facto standard for data center management. At one time I thought Microsoft might challenge here, but VMware has won that war. Thus it makes sense for NVIDIA to enable its customers to manage vGPUs through vCenter.
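For illustration, here is a rough sketch of what vGPU visibility through vCenter can look like programmatically, using the open-source pyvmomi SDK. The hostname and credentials are placeholders, and the VmiopBackingInfo device backing used here to detect a vGPU assignment is an assumption worth verifying against your vSphere API version.

```python
# Rough sketch: enumerate VMs in vCenter and flag those with an NVIDIA vGPU
# profile attached. Uses the open-source pyvmomi SDK; the VmiopBackingInfo
# backing type used to detect vGPUs should be verified against your vSphere
# API version. The host and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], recursive=True)
    for vm in view.view:
        devices = vm.config.hardware.device if vm.config else []
        for dev in devices:
            backing = getattr(dev, "backing", None)
            if isinstance(backing,
                          vim.vm.device.VirtualPCIPassthrough.VmiopBackingInfo):
                print(f"{vm.name}: vGPU profile {backing.vgpu}")
    view.Destroy()
finally:
    Disconnect(si)
```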
NVIDIA vComputeServer also enables GPU aggregation
GPU sharing should be game-changing for most businesses interested in AI/ML, which should be almost every company today. But vComputeServer also supports GPU aggregation, which lets a VM access more than one GPU, often a requirement for compute-intensive workloads. vComputeServer supports both multi-vGPU and peer-to-peer computing. The difference between the two is that with multi-vGPU, the GPUs can be distributed and need not be connected to one another; with peer-to-peer, the GPUs are connected using NVIDIA’s NVLink, which makes multiple GPUs look like a single, more powerful GPU.
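As a rough illustration of that difference, the sketch below, assuming a VM with two or more GPUs assigned and a CUDA build of PyTorch, asks CUDA whether each pair of devices can access each other’s memory directly. Peer access typically shows up for NVLink-connected GPUs (it can also appear over PCIe, depending on topology), while plain multi-vGPU devices remain independent.

```python
# Probe the aggregation mode from inside a multi-vGPU VM: peer-to-peer GPUs
# (e.g., NVLink-connected) can address each other's memory directly, while
# independent multi-vGPU devices cannot. Assumes >= 2 GPUs assigned to the VM.
import torch

n = torch.cuda.device_count()
print(f"{n} GPUs visible to this VM")

for i in range(n):
    for j in range(n):
        if i != j:
            p2p = torch.cuda.can_device_access_peer(i, j)
            mode = "peer-to-peer" if p2p else "independent"
            print(f"GPU {i} -> GPU {j}: {mode}")

if n >= 2:
    # Cross-GPU copy: direct device-to-device when peer access is available;
    # PyTorch stages through host memory otherwise, so this works either way.
    a = torch.randn(4096, 4096, device="cuda:0")
    b = a.to("cuda:1")
    print("Cross-GPU copy complete:", b.device)
```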
A few years ago, the use of GPUs was limited to a handful of niche workloads performed by specialized teams. The more data-driven companies become, the more GPU-accelerated processes will play a key role in not just artificial intelligence but also day-to-day operational intelligence.
Together, VMware and NVIDIA have created a way for companies to get started with AI, data science and machine learning without having to break the bank.
Zeus Kerravala is an eWEEK regular contributor and the founder and principal analyst with ZK Research. He spent 10 years at Yankee Group and prior to that held a number of corporate IT positions.