AMD Aims to Boost APU Energy Efficiency 25-Fold by 2020
Officials with Advanced Micro Devices raised some eyebrows this week when they said the company intends to improve the energy efficiency of its accelerated processing units 25-fold by 2020.
Such gains would greatly outpace what the company has done over the previous six years, when it improved the energy efficiency of its APUs in typical computing scenarios by more than 10 times. Speaking at the China International Software and Information Service Fair (CISIS) conference June 19, AMD CTO Mark Papermaster said energy efficiency is central to the company's development processes.
"Creating differentiated low-power products is a key element of our business strategy, with an attending relentless focus on energy efficiency," Papermaster said in a statement. "Through APU architectural enhancements and intelligent power-efficient techniques, our customers can expect to see us dramatically improve the energy efficiency of our processors during the next several years."
The company's "25X20" goal is a testament to that commitment, he said.
Technology worldwide consumes an awful lot of power, according to AMD. Three billion personal computers use more than 1 percent of all the energy consumed every year, while 30 million servers use another 1.5 percent of electricity, all at an annual cost of between $14 billion and $18 billion, the company said. That will only increase as the use of mobile devices continues to expand and the Internet of Things continues to grow.
AMD for several years has made energy efficiency a key part of its overall strategy, with officials often touting the performance-per-watt capabilities of its chips. Creating its accelerated processing unit (APU) architecture—with the CPU and graphics technology residing on the same piece of silicon—was a key step in that direction.
Now, company officials believe that, with the right mix of cutting-edge power management capabilities, advances in the architecture of the APUs, improvements to semiconductor manufacturing processes and a focus on what they call "typical power use," AMD will be able to increase the power efficiency of its chips at a pace that exceeds Moore's Law by at least 70 percent. While Moore's Law states that the number of transistors in processors essentially doubles every two years, research has shown that the energy efficiency of those processors has historically tracked closely with the rate of improvement Moore's Law predicts, AMD officials said.
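The "70 percent" figure can be sanity-checked with simple arithmetic: compare the annual improvement rate implied by a 25-fold gain over the six years from 2014 to 2020 against the roughly 41 percent annual rate implied by a doubling every two years. This sketch uses only numbers from the article; the per-year framing is an interpretation, not AMD's published math.

```python
# Moore's Law pace: doubling every 2 years -> ~41% improvement per year.
moore_annual = 2 ** (1 / 2)

# AMD's 25X20 goal: 25x improvement over 6 years -> ~71% per year.
target_annual = 25 ** (1 / 6)

print(round((moore_annual - 1) * 100, 1))   # ~41.4 (% per year)
print(round((target_annual - 1) * 100, 1))  # ~71.0 (% per year)

# The target's annual improvement rate is ~1.7x the Moore's Law rate,
# i.e., roughly 70 percent faster.
print(round((target_annual - 1) / (moore_annual - 1), 2))
```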
A key to AMD's plans lies in what officials call heterogeneous computing—in which the CPU and GPU share the chip space with special-purpose accelerators such as digital signal processors and video encoders. At the same time, in a heterogeneous system architecture (HSA), the CPU and GPU share access to the same memory, and because the CPU and GPU are viewed by the system as a single processor, workloads are easily moved to whichever one is best suited to the task.
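The scheduling idea above can be sketched as a toy dispatcher. This is a hypothetical model for illustration only, not AMD's API: it assumes a single shared address space (so routing a task is a scheduling decision, not a data copy) and uses a made-up "parallelism" score with an arbitrary threshold to pick a processing component.

```python
def dispatch(task):
    """Route a task to the component best suited to it (illustrative).

    Highly parallel work (e.g., pixel or signal processing) suits the
    GPU's many simple cores; branchy, serial work suits the CPU. The
    0.5 threshold and the 'parallelism' score are assumptions.
    """
    return "GPU" if task["parallelism"] > 0.5 else "CPU"

tasks = [
    {"name": "video_decode", "parallelism": 0.9},  # data-parallel
    {"name": "ui_event",     "parallelism": 0.1},  # serial, latency-bound
]
for t in tasks:
    print(t["name"], "->", dispatch(t))
```

In a real HSA system the point is that no inter-chip transfer accompanies this decision, since both components address the same memory.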
Officials with AMD—which helped found the HSA Foundation—said that eliminating connections between discrete chips, treating the CPU and GPU as peers, and shifting the workloads to the optimum processing component all save power and accelerate workload performance.
Intelligent power management also will be key, they said, given that most computing time is spent idle—such as the intervals between keystrokes or touch inputs, and the time spent reviewing content being displayed. If systems can execute tasks as quickly as possible, then minimize the power used during idle time, energy efficiency can be increased, according to officials. AMD's latest APUs analyze the workloads and applications in real time and dynamically adjust clock speed to ensure optimal throughput rates. The processors can also overclock their speeds to do a job quickly and then drop back into low-power idle mode.
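This "race to idle" strategy can be illustrated with a small energy model. All the numbers below are made up for illustration—they are not AMD measurements—but they show why finishing a fixed amount of work quickly at high power, then dropping to a low-power idle state, can cost less total energy than running slowly for the whole interval between user inputs.

```python
WORK = 10.0    # abstract units of work to finish
WINDOW = 10.0  # seconds until the next keystroke or touch input

def energy(p_active, speed, p_idle):
    """Total joules over the window: active burst plus idle remainder."""
    t_active = WORK / speed
    t_idle = max(WINDOW - t_active, 0.0)
    return p_active * t_active + p_idle * t_idle

# Race to idle: 10 W burst for 1 s, then 0.5 W idle for 9 s.
fast = energy(p_active=10.0, speed=10.0, p_idle=0.5)

# Slow and steady: 3 W for the entire 10 s window, never idle.
slow = energy(p_active=3.0, speed=1.0, p_idle=0.5)

print(fast, slow)  # 14.5 vs 30.0 joules
```

The effect depends on the idle state being genuinely cheap, which is why the article pairs fast execution with aggressive idle-power minimization.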
AMD officials also noted that the company has been working on power-efficiency technologies for many years, and that going forward, more innovations will emerge, from inter-frame power gating and per-part adaptive voltage to voltage islands and continued integration of system components.