Why NVIDIA Wins, and Will Continue To Win

eWEEK TREND ANALYSIS: Since its inception, NVIDIA has focused on the end user, whether a developer or a data scientist, to ensure that buyers of its products can take full advantage of accelerated computing. This enables a wide range of use cases that aren't possible at any useful speed or scale with CPUs alone.


Last week I read a post on SeekingAlpha in which the author, Arne Verheyde, argues that NVIDIA's new Ampere architecture and graphics processing unit will struggle to compete with Intel's future GPU. With all due respect to the author, that thesis reflects a serious misunderstanding of what makes NVIDIA the company it is.

I've talked to many analysts and industry experts who feel the same way as Verheyde, and I understand why. A company like Intel is massive and has had its way in silicon for years. But the fact is, Intel has never managed to build a GPU that's even in the same neighborhood as NVIDIA's.

Technology aside, even if Intel somehow stops bumbling in the area of GPUs and gets its act together, it's still unlikely to catch NVIDIA, because the GPU market leader's value proposition is about much more than the actual silicon. Since its inception, NVIDIA has focused on the end user, whether a developer or a data scientist, to ensure that buyers of its products can take full advantage of accelerated computing. This enables a wide range of use cases that aren't possible at any useful speed or scale with CPUs alone, including machine learning, artificial intelligence, ray tracing, gaming and a whole lot more.

Accelerated computing certainly starts with the GPU, and if that's all there was, I might also be of the opinion that NVIDIA's day is coming. But on top of the silicon, NVIDIA layers its CUDA software stack, which is as important to accelerated computing as the GPU itself. This includes the acceleration libraries, algorithms, system software and the optimizations that are typically co-created with application developers.

The developer community has made NVIDIA what it is

In many ways, it's the developers that have helped NVIDIA become the massive company it is today. NVIDIA is as important to accelerated computing as the previously mentioned Intel was to general-purpose computing. NVIDIA's GPU Technology Conference (GTC), held digitally this year, has evolved from a niche trade show into a major developer event. During his "kitchen keynote," NVIDIA CEO Jensen Huang said the company had announced 50 new software development kits across the three layers of its software stack.

-------------------------------------------------------------------------


Editor's note: Go here to see a 9-minute YouTube interview with author Zeus Kerravala providing some deeper information about this topic:

-------------------------------------------------------------------------

The first layer is the CUDA architecture, which spans the entire NVIDIA product line. It's not a language but a normalizing layer that makes development consistent from product to product. A developer who creates an app for CUDA knows that the entire installed base of NVIDIA GPUs, past, present and future, will run that app. The next layer is CUDA-X, the set of acceleration libraries, which optimizes performance for specific functions. The third layer is a set of domain-specific libraries for things such as RTX (ray tracing), HPC (high-performance computing), RAPIDS (data analytics) and so on.
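To make that layering concrete, here's a minimal sketch of the kind of data-analytics workload the RAPIDS libraries target. The code below runs on the CPU with pandas; RAPIDS cuDF deliberately mirrors the pandas DataFrame API, so on a CUDA-capable machine largely the same lines could run on the GPU. The sample data is purely illustrative, not from the article.

```python
import pandas as pd

# CPU baseline using pandas. RAPIDS cuDF exposes a near-identical API,
# so on a system with an NVIDIA GPU this could, in principle, become
# `import cudf as pd` with little or no other change to the code.
df = pd.DataFrame({
    "device": ["gpu", "cpu", "gpu", "cpu"],
    "latency_ms": [1.2, 9.8, 1.5, 10.1],   # illustrative numbers only
})

# A group-by aggregation: the sort of DataFrame operation that
# GPU-accelerated analytics libraries speed up at scale.
mean_latency = df.groupby("device")["latency_ms"].mean()
print(mean_latency)
```

The point of the mirrored API is exactly the consistency argument above: developers write against one familiar interface, and the acceleration layer underneath decides how the work maps onto the hardware.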

The "stack" approach has made NVIDIA very popular with developers, and today the number of developers sits at a whopping 1.8 million, making it by far the largest developer population for a GPU vendor. This year, even without a live GTC, 46,000 people signed on to watch the digital version of the event.

NVIDIA has also invested a lot of time and energy in creating optimal system architectures. In addition to the software, the company builds turnkey hardware systems to run it. These range from a chip, such as Ampere, which is actually a GPU with an accelerator, up to a full system such as its DGX server, a fully integrated set of servers designed for artificial intelligence. DGX includes tuned storage, processors and networking, as well as all the necessary software to make AI "plug and play."

Edge computing is the next frontier

NVIDIA recently announced EGX, which is similar to DGX but optimized for edge computing. The company also has small form-factor compute nodes, such as Drive, meant for self-driving cars. These are just a few examples of system-level architectures that take much of the heavy lifting out of working with NVIDIA. It's important to note that the above examples are sold as products but are really meant to serve as reference architectures, so that ecosystem partners can follow the design and sell their own products. For example, one could buy a Dell version of DGX, enabling companies to maintain vendor preference.

The final piece of the NVIDIA success puzzle is its ecosystem. The company has a massive number of partners, including auto manufacturers, server vendors, network suppliers, software companies, cloud providers and more. NVIDIA works closely with all of them to ensure anything released to the market works, and works well. The benefit to the customer is that they can spend more time innovating and almost no time trying to cobble products together.

The run NVIDIA has been on for the past several years is likely to continue as demand for accelerated computing grows. Intel, AMD and others have their sights set on taking a chunk out of its business, but beating NVIDIA requires more than a better mousetrap. NVIDIA has a huge lead when it comes to ecosystem support, and that's something that will take the competition years to build.

Zeus Kerravala is an eWEEK regular contributor and the founder and principal analyst with ZK Research. He spent 10 years at Yankee Group and prior to that held a number of corporate IT positions.