Nvidia Launches CUDA Support for ARM Server Chips
Coprocessors and GPU accelerators are gaining momentum in HPC. According to the Top500 list of the world's fastest supercomputers released June 17, 54 systems used one or the other, with 39 choosing Nvidia GPUs and three choosing AMD's ATI Radeon graphics products. Eleven used Intel's Xeon Phi, including China's Tianhe-2, the world's fastest system.

IDC analysts, in a study released June 17, found that the number of HPC sites using coprocessors and accelerators doubled over the past two years, with Nvidia GPUs and Xeon Phi coprocessors in close competition.

In addition to the CUDA 5.5 announcement, Nvidia officials also noted that GPU accelerators are being used to develop neural networks, computing environments that operate in similar fashion to the human brain, including by adapting their work to what they learn in doing their jobs. Google created a neural network that used 16,000 CPUs in 1,000 servers to create 1.7 billion parameters, connections similar to those between neurons in the brain. By contrast, Nvidia and researchers at Stanford University's Artificial Intelligence Lab created a network just as large with three servers using Nvidia GPU accelerators. Using 16 servers with GPU accelerators, they created an 11.2 billion-parameter neural network, about 6.5 times larger than Google's.

Big data is another area where systems with GPU accelerators are drawing interest because of their performance and energy efficiency, Nvidia officials said. The GPU accelerator "business is going into areas it hasn't gone into before, and going into markets it's not familiar with, and that's because of demand."
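The scale comparison above is straightforward to check. A minimal sketch, using only the parameter counts cited in the article:

```python
# Rough check of the neural-network scale figures cited in the article.
# Both parameter counts come from the text; the ratio is simple arithmetic.

google_params = 1.7e9     # Google's network: 1.7 billion parameters (16,000 CPUs, 1,000 servers)
nvidia_params = 11.2e9    # Stanford/Nvidia network: 11.2 billion parameters (16 GPU servers)

ratio = nvidia_params / google_params
print(f"{ratio:.1f}x larger")  # prints "6.6x larger"
```

The exact ratio is about 6.6, which the article rounds to 6.5 times.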
Nvidia also listed other artificial intelligence labs that use its GPU accelerators. Nuance, for example, has for the past four years used neural networks running on GPU accelerators to help its speech-recognition technology handle issues such as accents and background noise, Nvidia's Kim said.