One of the recurring cycles in technology is that when a new problem arises, existing technology is first pressed into service to address it, but eventually something purpose-built emerges that performs significantly better. This was true with smartphones: initially they were little more than capable PDAs or two-way pagers before Apple rethought the category. More recently, it was true with autonomous driving technology, which was initially based on PCs until NVIDIA created the Drive AGX platform, which was vastly better.
NVIDIA has done it again with its DGX SuperPod, a massive supercomputer for simulation, which costs in the neighborhood of $30 million and, in theory, obsoletes every other visual simulation platform at scale in the market. It should also significantly accelerate the time to market for autonomous trucks, particularly those made by Volvo.
Let me explain.
Simulation at Scale
One of the big issues with autonomous cars is training. Initially, the plan was to train these cars much the way early digital maps were created: by driving the training platform on every road. This proved exceedingly slow, required a massive additional effort to cover changing weather conditions, and was inadequate for certain transitory hazards such as deer strikes or children running out from between parked cars. In addition, the training had to happen at the speed a car drives, regardless of how fast the training computer could operate. That approach was simply too physically limiting.
Given that one of the primary drivers for self-driving cars is the elimination of accidents, the industry has little tolerance for a self-driving car that can't adapt to every situation it is likely to encounter. One major accident, particularly early on, could compromise the effort for the rest of its likely short life. So the overriding need for safety drove the decision to shift to simulated driving to train the AIs.
With simulation, you can address the massive diversity of threats on today's roads, from flat tires to semi-truck drivers operating unsafely to extreme weather events to simple human error. And all the training can be done at machine speed, with near-infinite variables, using simulations of roads already captured in existing mapping systems.
But existing supercomputers, designed to crunch massive amounts of data for tasks like weather forecasting, simply weren't built to emulate the video, LIDAR, radar and other sensor feeds from the many concurrent simulations that would need to run.
Birth of the DGX SuperPod
Initially, NVIDIA created the DGX platform because existing workstation designs weren't built for AI development; the work demands a degree of parallel processing that conventional workstations, regardless of configuration, can't provide. Because no one seemed interested in building an expensive product for a market that didn't yet know it wanted one, NVIDIA built its own and filled a gap that the conventional market didn't even see.
What makes this interesting is that NVIDIA needed a workstation like this to develop its own AIs; it was filling a need it saw because the need was its own, and it was unmet.
I suspect the DGX SuperPod, which links 96 DGX-2 systems together with extremely high-speed interconnects, came about the same way. NVIDIA needed a system that could run these emulations at scale, and no supercomputer on the market could do the job.
Wrapping Up: Into the Future
This is certainly an interesting story in that we again see the cycle in which a market need is initially addressed by existing hardware and software and eventually satisfied by something far more focused, powerful and interesting. The next phase of any such development is to find the other problems this unique hardware can handle.
The DGX platform was created to build next-generation AIs and is optimized for massive amounts of sensor data, particularly visual sensor data. Now take 96 of these systems, connect them, and you have a platform roughly 100 times more powerful, one that could be used to create the next generation of AIs, but also for any massive sensor-based application built around video and rich sensor data.
Other possible use cases: space exploration, security at a city level, real-time intelligence analysis, a level of potential real-time photographic analysis that could bring new life to aging satellites, aerodynamics research for transportation or high-end racing, and breakthroughs in areas like visual diagnosis or threat detection on a global scale.
We don't yet realize how much change a system like this could drive; we are only seeing the tip of the iceberg now. The NVIDIA SuperPod is, as a result, a harbinger of a very different, far more intelligent future. It could even eventually become the basis for the first real Holodeck, because that would truly be simulation at scale.
So, while this was developed to help build self-driving cars and trucks (Volvo announced it will be using the system for its autonomous truck efforts), I expect it will dramatically expand into and change a vast number of markets once folks figure out just how powerful this puppy truly is.
Rob Enderle is a principal at Enderle Group. He is an award-winning analyst and a longtime contributor to QuinStreet publications and Pund-IT.