Of all the companies I follow, NVIDIA is currently leading in putting on large-scale events. Part of the reason is that the company makes tools that lend themselves to video presentations, and it uses those tools to produce its virtual events. I’m covering the keynote for NVIDIA’s latest GPU Technology Conference (GTC), and I am again struck by the incredibly high production quality, image richness and visual excitement unique to NVIDIA. This year, NVIDIA is the only firm that has found a way to make its virtual event, at least for major sessions like this keynote, better than an in-person event.
Let’s cover both the method and the content from NVIDIA’s GTC keynote this week.
CEO Jensen Huang is a high-energy speaker who doesn’t like to rehearse. At conventional in-person events, that lack of rehearsal can lead to talks that run significantly over time and to lots of content that isn’t of interest to the audience. For instance, at one year’s CES keynote, much of the content was on autonomous cars, which at the time weren’t of interest to most of the consumer-device-focused audience.
But with these virtual events, not only does NVIDIA use lots of video to enrich the talk, its event team also breaks the keynote into short, topic-focused segments so that the audience can pick the topics they want and not suffer through long segments that hold little interest for them. It is a better use of their time, they are more likely to retain the information, and the production crew can do retakes to ensure each segment’s quality. Jensen is tight and focused and comes across as far more professional using this approach, which reflects well on him, on NVIDIA and on the showcased products.
What is also interesting is that in NVIDIA’s keynote, the CEO doesn’t initially take the stage; NVIDIA’s technology and vision do. In effect, NVIDIA itself sets the stage with a video somewhat similar to Corning’s “Day Made Of Glass.” I often find firms struggle to convey just what they do, and this opening video does a great job of that. After a spot of less than 3.5 minutes, you have an excellent idea of what NVIDIA does and its impact. Every firm should open with a video that showcases the company and the products it has and is announcing.
In this case, the video spanned NVIDIA’s work on AI, its work for creators and the solutions specifically targeting the COVID-19 pandemic. Now let’s move to the content.
Setting the tone
Jensen opened with an interesting statement: “AI is the automation of automation,” and it is. AI, in its ultimate form, is systems increasingly run and managed by other systems. It is potentially a level of scale that will define the next industrial revolution, one we aren’t quite into yet. He followed with the firm’s focus on AI and a series of stats, including the 2 million developers currently working on NVIDIA’s platforms.
Jensen’s next section was on Ampere graphics cards. These cards and the related technology remain at the heart of NVIDIA’s business, and Ampere represents the second generation of its RTX platform. The cards remain sold out but showcase the market’s move to ray tracing.
The Metaverse, a concept from the book “Snow Crash,” is a virtual-reality form of the internet where people virtually experience new things, collaborate, interact and compete. Omniverse is NVIDIA’s platform to help create that future Metaverse. It isn’t just for people; Omniverse is also a place where virtual robots can be created and trained before being manufactured in the real world. The Metaverse is likely where our next set of robotic servants will be born. (It should be noted that NVIDIA used a video to showcase this capability and convey a vision of this coming future.)
AI to fight disease
Bringing the talk to current events, Jensen pivoted to how NVIDIA’s technology is being used to fight diseases like COVID-19. He opened by talking about the billions of dollars spent on research, how 90% of that money goes to projects that ultimately fail, and how the cost of developing a drug doubles every nine years. Using simulation in NVIDIA’s Omniverse, much of the work can be done before physical testing begins, lowering the cost of experimentation and increasing the success rate at the end.
Pharmacokinetics studies how potential drugs interact with the body. Much of the work of determining a medication’s outcome can now be done in simulation. This testing can be done before trials to increase the trial success rate while lowering the risk to trial participants. The process identifies potential remedies and can identify which patients are most likely to benefit from the resulting treatment.
NVIDIA announced Clara Discovery, created to enable faster and more effective discovery of life-saving drugs. To help the UK respond more rapidly to pandemics and other diseases, NVIDIA is partnering to create a focused supercomputer there called Cambridge-1. Supporting both academia and private industry, this effort will be one of the most potent medically focused efforts in the world. The concept of virtual medical testing could be industry-changing because you can do thousands of virtual tests in the time it would take to do one human test. It is already having a substantial positive impact on the current pandemic; it may prevent the next one.
The DGX SuperPOD, which is available now, allows a research entity to scale up a simulation solution relatively quickly, with the potential for a massive payback once the system enters the testing process. It takes just a few weeks to train a researcher on the new system. The University of Florida, India, Argonne National Laboratory, the Wallenberg Lab, Facebook Research and Korea’s #1 search engine are all spinning up SuperPODs.
AI software that writes software
AI is advancing very quickly; one example is that AI is increasingly able to write its own software. The scalability of this approach has started a race to create the most powerful AI in the world. The computation needed to create a current-generation AI has been growing at an incredible rate, 30x over the last five years. The resulting systems are pushing every aspect of the computer: processing, transport, storage and memory. One of the exciting tools NVIDIA has created, the Transfer Learning Toolkit, lets customers take a pre-trained model and apply it to their own data sets. This effort is computers building computers and a significant step down the path of self-replicating AIs.
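To make the transfer-learning idea concrete, here is a minimal, illustrative sketch of the general technique: keep a pre-trained feature extractor frozen and train only a small new "head" on your own data. The extractor, data and training loop here are toy stand-ins of my own, not NVIDIA's toolkit or API.

```python
def pretrained_features(x):
    # Stand-in for a frozen, pre-trained network: it maps a raw input
    # to a fixed feature vector. In practice this would be a deep model
    # trained on a large generic dataset, and its weights stay frozen.
    return [x, x * x]

def train_head(samples, labels, lr=0.1, epochs=500):
    # Fit a tiny linear head on top of the frozen features using plain
    # per-sample gradient descent on squared error. Only these three
    # parameters are learned; the feature extractor never changes.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = pretrained_features(x)
            pred = w[0] * f[0] + w[1] * f[1] + b
            err = pred - y
            w[0] -= lr * err * f[0]
            w[1] -= lr * err * f[1]
            b -= lr * err
    return w, b

# The "customer" data set: a handful of examples of y = x^2.
xs = [0.0, 0.5, 1.0, 1.5]
ys = [0.0, 0.25, 1.0, 2.25]
w, b = train_head(xs, ys)
pred = w[0] * 1.2 + w[1] * (1.2 * 1.2) + b
print(round(pred, 2))  # should land close to 1.44, i.e. 1.2 squared
```

The point of the pattern is economy: the expensive part (the extractor) is reused as-is, so only a few parameters need training on the customer's comparatively small data set.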
Dominating inference, supercharging Microsoft Office
NGC is NVIDIA’s cloud catalog of containerized software stacks, with 1 million downloads in just a couple of years. NGC is now available in the Azure, AWS and GCP marketplaces. (This is the 10th anniversary of the first NVIDIA GPU in the AWS cloud; NVIDIA GPUs are now in every major cloud platform.) NVIDIA estimates that its aggregate compute throughput has been increasing 10x every two years. NVIDIA is projected to reach a 90% market share in a few years if current growth rates are sustained. One interesting new application is that Microsoft is applying NVIDIA’s capability to Office 365. The first features are smart grammar correction, Q&A and text prediction, and this could have a significant impact on productivity for folks like me who write a lot.
Cybercrime costs the global economy around a trillion dollars a year. American Express is using NVIDIA AI to look at tens of millions of transactions daily to detect fraud. Microservices like voice recognition and recommendations are used by a wide variety of applications at the same time. Jensen demonstrated Speech Squad on Jarvis versus a CPU for a voice query, much like one you’d make to a digital assistant. The Jarvis version cut the latency in half and delivered a much more human-like response at a third of the cost of the CPU-based solution.
Jarvis, NVIDIA’s state-of-the-art conversational AI application, is now in open beta. It currently holds the record for speech recognition speed. Jensen then announced NVIDIA Maxine, which significantly reduces the bandwidth necessary for a video call and makes it look like you are looking at the other person even when you are not. It can also animate an avatar based only on your voice in real time, virtually improve your lighting, and do real-time language translation.
AI for every company
One of the most powerful AI tools is the recommender. This class of tool is what gets us to the products and services we may want to buy; recommenders increase conversion rates, improve customer satisfaction and increase engagement. A good recommender learns your explicit and implicit wants, needs and sales vulnerabilities. Recommenders are extreme data-processing and machine-learning services. They use a mix of structured and unstructured data to determine the user’s preferences. The more predictive the tool, the higher the clickthrough rate and the more queries are converted to sales. NVIDIA Merlin is the first end-to-end recommender engine ever created; it goes from the data lake, through training, to inference. These recommenders consume entire data centers, and they are evolving.
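The core idea of a recommender can be sketched in a few lines: score items a user hasn't seen by how often they co-occur with items the user has interacted with. This is a toy item-to-item example of my own, not Merlin's architecture, which operates on learned embeddings at data-center scale; the users and items below are invented.

```python
from collections import defaultdict

# user -> set of items that user interacted with (toy, invented data)
interactions = {
    "ann": {"gpu", "monitor", "keyboard"},
    "bob": {"gpu", "monitor", "headset"},
    "cat": {"keyboard", "mouse"},
}

def item_cooccurrence(interactions):
    # Count how often each ordered pair of items appears in the same
    # user's history; shared history is the implicit preference signal.
    counts = defaultdict(int)
    for items in interactions.values():
        for a in items:
            for b in items:
                if a != b:
                    counts[(a, b)] += 1
    return counts

def recommend(user, interactions, top_n=1):
    # Score every unseen item by summing its co-occurrence with each
    # item the user already has, then return the highest scorers.
    counts = item_cooccurrence(interactions)
    seen = interactions[user]
    scores = defaultdict(int)
    for owned in seen:
        for (a, b), c in counts.items():
            if a == owned and b not in seen:
                scores[b] += c
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [item for item, _ in ranked[:top_n]]

print(recommend("ann", interactions))  # -> ['headset']
```

Production systems replace the co-occurrence counts with learned models over structured and unstructured features, but the shape of the problem, turning interaction history into a ranked list of unseen items, is the same.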
Merlin is now in open beta, and it is in production at Tencent. Merlin’s core engine is NVIDIA RAPIDS, which is already demonstrating significant increases in learning efficiency and recommendation accuracy. RAPIDS is the fastest ETL engine on the planet, and it is based on the most popular data science frameworks. On the industry-standard data-processing benchmark, running 30 complex database queries on a 10TB data set, a 16-node DGX cluster ran 20x faster at 1/7th the cost and 1/3rd the power of a CPU-based solution. Capital One is using RAPIDS for its fraud detection.
Cloudera announced the Cloudera Enterprise Data Cloud platform based on NVIDIA RAPIDS this week, and the solution will be available to Cloudera’s 2,000 customers. NVIDIA also announced a new partnership with VMware for accelerated AI computing.
Trillions of intelligent things
These intelligent things include robots working alongside people, autonomous cars, sensors (including cameras), digital assistants and an increasing host of autonomous devices. To address this need, NVIDIA announced the EGX Edge AI platform. This simple and relatively inexpensive inference platform is designed to create an iPhone moment for the world of connected, increasingly intelligent devices. Jensen took us through implementations that already include smart, highly automated warehouses, customer checkout at supermarkets and patient monitoring in health care. This session showcased the cutting edge of where we are with intelligent things and where we soon will be.
For some time, I’ve been arguing that the next big thing in technology is likely to be robotics. Of the tech vendors, only NVIDIA seems to be genuinely taking this market seriously. At this event, NVIDIA announced the Jetson Nano 2GB, a $59 robotics starter kit that, with the models NVIDIA has created, could help any company build lower-cost and more highly capable robots. BMW worked with NVIDIA to create a highly automated factory of the future. Interestingly, the demonstration was a factory building robots, not cars, and not just any robots, but robots that might be released into the consumer market. It fascinates me how engaged the carmakers outside the U.S. are in these developments. Jensen then moved to an autonomous-driving demonstration with Mercedes and what appears to be at least a level-4-class solution for self-driving using NVIDIA’s Drive AV solution.
Computing for the Age of AI: The ARM Acquisition
NVIDIA announced a major ARM initiative at the event, and it came in three parts. First, NVIDIA will complement ARM partners with GPU, networking, storage and security technologies to create complete accelerated platforms. Second, it will work with partners to create platforms for HPC, cloud, edge and PCs. Third, NVIDIA will port the NVIDIA AI and RTX platforms to ARM; until now, they were x86-only. That’ll shake things up.
NVIDIA GTC is the one show where you genuinely get a sense of the cutting-edge technologies that are changing and will change our everyday lives as we move into the next industrial revolution. Centered on AI and autonomous devices but including advancements in medicine, transportation, perception and imagination, the company remains on the cutting edge of change. If you want to ride the next wave of innovation rather than become a sad footnote in history, this again was the must-see event of the year.
The keynote ended with an orchestrated concert written by an AI app, anticipating a future where increasingly intelligent systems will create much of what we see and hear. The world of tomorrow is coming, I got a good look at it this week, and it was and will be amazing!
Rob Enderle is a principal at Enderle Group. He is a nationally recognized analyst and a longtime contributor to eWEEK and Pund-IT.