For more than four decades, Moore’s Law has held true, thanks in large part to Intel first cranking up the speed of its processors and, more recently, rapidly growing the number of processing cores on a single chip.
However, according to Bill Dally, Moore’s Law has reached its limit on traditional CPUs from the likes of Intel and Advanced Micro Devices, and a new way of doing things is needed if it is to continue.
Not surprisingly, Dally, vice president and chief scientist at graphics chip maker Nvidia, believes the only salvation for Moore’s Law lies in moving from serial processing to parallel processing, and more specifically, from CPUs to GPUs.
In a column posted on Forbes.com April 29, Dally argues that the energy demands of the CPUs Intel and AMD are producing have created an environment in which Moore’s Law can no longer hold.
“We have reached the limit of what is possible with one or more traditional, serial central processing units, or CPUs,” Dally wrote. “It is past time for the computing industry, and everyone who relies on it for continued improvements in productivity, economic growth and social progress, to take the leap into parallel processing.”
Moore’s Law sprang from a paper written by Intel co-founder Gordon Moore 45 years ago, in which he predicted that the number of transistors on a chip would double at a regular cadence, later pegged at roughly every two years, and that the performance of the CPU would double along with it.
However, what worked in the 1980s and 1990s is not working anymore, despite what Intel officials say, and a new way of computing must be adopted, Dally said.
In comparing serial processing with parallel processing, the Nvidia executive pointed to the task of counting the words in his column. In serial processing, one person would count every word. In parallel processing, each paragraph would be handed to a different person, and the word counts from each paragraph would be added together.
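To make the analogy concrete, here is a minimal CUDA sketch of that division of labor; it is illustrative code written for this article, not anything from Dally’s column. Each GPU thread inspects a single character, threads that land on the start of a word record it, and the partial findings are merged into one total with an atomic add.

```cuda
// Parallel word count: one thread per character. A word begins wherever a
// non-space character follows a space (or the start of the text).
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

__global__ void count_words(const char *text, int len, int *count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= len) return;
    bool word_start = text[i] != ' ' && (i == 0 || text[i - 1] == ' ');
    if (word_start)
        atomicAdd(count, 1);   // merge the per-thread findings
}

int main() {
    const char *text = "It is past time to take the leap into parallel processing";
    int len = (int)strlen(text);

    char *d_text;
    int *d_count;
    cudaMalloc(&d_text, len);
    cudaMalloc(&d_count, sizeof(int));
    cudaMemcpy(d_text, text, len, cudaMemcpyHostToDevice);
    cudaMemset(d_count, 0, sizeof(int));

    // Launch one thread per character, 256 threads per block.
    count_words<<<(len + 255) / 256, 256>>>(d_text, len, d_count);

    int count = 0;
    cudaMemcpy(&count, d_count, sizeof(int), cudaMemcpyDeviceToHost);
    printf("%d words\n", count);   // prints: 11 words

    cudaFree(d_text);
    cudaFree(d_count);
    return 0;
}
```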
As the demand for greater computer performance grows, the problems with the serial CPU architecture will become more apparent, and Moore’s Law will end, he said.
“[T]hese needs will not be met unless there is a fundamental change in our approach to computing,” Dally wrote. “The good news is that there is a way out of this crisis. Parallel computing can resurrect Moore’s Law and provide a platform for future economic growth and commercial innovation. The challenge is for the computing industry to drop practices that have been in use for decades and adapt to this new platform.”
There is now a need for energy-efficient systems built on parallelism rather than serial processing, Dally said.
“A fundamental advantage of parallel computers is that they efficiently turn more transistors into more performance,” Dally wrote. “Doubling the number of processors causes many programs to go twice as fast. In contrast, doubling the number of transistors in a serial CPU results in a very modest increase in performance, at a tremendous expense in energy.
“More importantly, parallel computers, such as graphics processing units, or GPUs, enable continued scaling of computing performance in today’s energy-constrained environment. Every three years we can increase the number of transistors (and cores) by a factor of four. By running each core slightly slower, and hence more efficiently, we can more than triple performance at the same total power. This approach returns us to near historical scaling of computing performance.”
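A back-of-the-envelope calculation shows how that arithmetic can work out. The scaling factors below are illustrative assumptions drawn from classic CMOS scaling, not figures from Dally’s column: the performance of N cores at clock f scales as N·f, while dynamic power scales as N·C·V²·f.

```latex
% Illustrative only: assumed classic-scaling factors, not Dally's numbers.
\[
\mathrm{Perf} \propto N f, \qquad P \propto N\,C\,V^{2} f
\]
% Over two process generations, suppose cores quadruple (N -> 4N), switched
% capacitance roughly halves (C -> C/2), supply voltage drops (V -> 0.8V),
% and each core runs slightly slower (f -> 0.8f):
\[
\mathrm{Perf} \to (4N)(0.8f) = 3.2\,Nf \quad (\text{more than triple}),
\]
\[
P \to \left(4 \cdot \tfrac{1}{2} \cdot 0.8^{2} \cdot 0.8\right) P \approx 1.02\,P \quad (\text{roughly the same power}).
\]
```

Under these assumed factors, quadrupling the cores while trimming the clock yields more than three times the throughput at essentially unchanged power, which is the trade Dally describes.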
Nvidia has been aggressively pushing its GPU technology into more mainstream computing environments, particularly in areas such as HPC (high-performance computing). Nvidia in October 2009 introduced its Fermi GPU architecture, which incorporates more than 3 billion transistors and 512 CUDA cores.
CUDA is the parallel computing engine for Nvidia’s GPUs.
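For readers unfamiliar with the model, a minimal CUDA program looks like the following; this is a generic sketch of the “one thread per element” pattern, not Nvidia sample code. A kernel describes the work of a single thread, and the launch spreads that work across roughly a million threads.

```cuda
// Minimal CUDA sketch: each thread computes one element of y = a*x + y.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];   // this thread's single element
}

int main() {
    const int n = 1 << 20;               // ~1 million elements
    size_t bytes = n * sizeof(float);
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // One thread per element, 256 threads per block.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f\n", hy[0]);      // expect 5.0
    cudaFree(dx);
    cudaFree(dy);
    delete[] hx;
    delete[] hy;
    return 0;
}
```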
AMD, through its ATI unit, also is looking to bring graphics computing more into the mainstream, and is working on its Fusion strategy of offering full CPU and GPU capabilities on a single chip. For its part, Intel is expected to continue growing the graphics capabilities of its processors.
Intel and Nvidia have been partners, but the relationship recently has been strained. Intel in February 2009 sued Nvidia, claiming a 2004 agreement between the two did not give Nvidia the right to develop chip sets for newer Intel chips, such as those developed with the “Nehalem” architecture. The suit is scheduled to go to trial this year.
The Federal Trade Commission is suing Intel for alleged anticompetitive practices, not only in its treatment of AMD but also in its dealings with Nvidia. Intel officials have denied the allegations.
Nvidia also has created a website called “Intel’s Insides,” which offers a series of editorial-style one-panel cartoons mocking Intel’s various legal issues.