eWEEK at 30: Multicore CPUs Keep Chip Makers in Step With Moore's Law

By Jeff Burt  |  Posted 2014-02-20

At the Intel Developer Forum in 2001, Paul Otellini stood on stage and showed off a Pentium 4 desktop PC chip running at 3.5GHz. At the time, Intel's fastest chip ran at 2GHz, but the company's plans called for a rapid increase in the frequency of its processors.

"Yesterday, we showed a 2GHz processor," said Otellini, who at the time was an executive vice president and general manager of the Intel Architecture Group and who later became the company's CEO. "Today, we showed a 3.5GHz processor. A 4GHz processor is on the horizon. We're convinced that we can scale the Pentium 4 to 10GHz."

The plan at the time was to crank up the frequency of the chips as the company grew the number of transistors in the processors. In 2000, Intel's fastest Pentium 4—at 1.5GHz—contained 42 million transistors. By 2005, the projections called for chips holding 400 million transistors and running at speeds approaching 10GHz.

At the time, ramping up the speed of the processor was the chief way of increasing its performance. There were other tweaks here and there, such as enlarging the cache or refining the instruction set, but the primary lever was frequency.

However, even while Otellini and others at Intel boasted about how processor frequencies would continue to increase rapidly over the following years, officials with Intel and other chip makers also were beginning to talk about the problems that arise at such speeds, in particular heat generation and power consumption.

That same year, Pat Gelsinger—at the time, the chief architect for Intel's processors and now CEO of VMware—noted that should chip designs continue on that path, they would become as hot as nuclear reactors by the end of the decade and as hot as the sun's surface by 2015. As far as chip development and design went, it was "no longer business as usual," he said, speaking at the IEEE's International Solid-State Circuits Conference that year. "No one wants to carry a nuclear reactor in their laptop onto a plane."

During his talk, Gelsinger raised a couple of options that already were beginning to be used by two other chip makers, IBM and Sun Microsystems: running multiple instruction threads on the chip and assembling multiple processing cores on a single chip.

If Intel, Advanced Micro Devices and others were to keep up with customer demands for ever more power and the cadence of Moore's Law, it was clear that something needed to be done beyond pushing the frequency.

"There was a constant demand from end users for more performance," Nathan Brookwood, principal analyst for Insight 64, told eWEEK. "To keep performance going on its growth curve, you had to go multicore."

Atiq Bajwa, director of microprocessor architecture for Intel's Platform Engineering Group, said changes had to be made.

"We were pushing frequency very, very hard," Bajwa said in an interview with eWEEK. "It was clear that thermal limits would be an issue if we kept up on that trend."

In 1965, Gordon Moore, a co-founder of Intel, observed that the number of transistors on a chip would essentially double every year, a statement that became known as Moore's Law. It later was amended to about every 18 months, referring to the chip's performance. Over the past few decades, Moore's Law has become a driving principle behind chip development, and many expect this trend to continue for at least a couple more decades.

According to Intel's Bajwa, frequency was the primary way of increasing performance, but not the only way. The micro-architecture could be manipulated—instructions tweaked, more cache added and data paths widened, for example. Other changes could be made as well, such as improving the bandwidth and reducing the latency to memory. Superscalar architectures—enabling a single-core chip to execute multiple instructions per clock cycle—also were used by Intel, AMD and RISC chip makers.

However, as the chips got smaller and faster, the issues of power, heat and efficiency continued to grow. By the late 1990s, chip designers were mapping out ways to put two or more processing cores on a single piece of silicon. The idea was that chip makers could continue pushing forward the performance of the chip by adding additional cores, while reducing the frequency of those cores and keeping power consumption in check.
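The tradeoff can be sketched with rough numbers. Dynamic CPU power scales approximately with capacitance × voltage² × frequency, and supply voltage tends to track frequency, so backing off the clock while adding a second core can deliver more aggregate throughput at lower power. The figures below are purely illustrative, not measurements of any particular chip:

```python
# Back-of-the-envelope: dynamic power P ~ C * V^2 * f (capacitance held constant).
# Because voltage tends to scale with frequency, power grows much faster than clock speed.

def relative_power(freq_ghz, volts):
    """Relative dynamic power for one core, in arbitrary units (C factored out)."""
    return volts ** 2 * freq_ghz

# One fast core at 3.0 GHz / 1.3 V vs. two slower cores at 2.0 GHz / 1.0 V
single_core = relative_power(3.0, 1.3)       # 1.69 * 3.0 = 5.07 units
dual_core = 2 * relative_power(2.0, 1.0)     # 2 * 2.00    = 4.00 units

# The dual-core design offers more aggregate clock cycles (2 x 2.0 > 3.0 GHz)
# while drawing less total power -- assuming the workload can be parallelized.
assert 2 * 2.0 > 3.0
assert dual_core < single_core
```

The caveat in the last comment matters: the extra throughput only materializes if software can spread its work across both cores, which is exactly the adoption problem described later in this article.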

IBM, looking to push past such rivals as Sun, Hewlett-Packard and Digital Equipment Corp. in the Unix server space, in 2001 introduced the Power4 chip, a 1GHz processor that was the first to offer two cores on a single die. The first system to run Power4, a system called Regatta, more than doubled the performance of competing systems at half the price, according to IBM officials.

Sun also rolled out multicore UltraSPARC chips, eventually putting as many as eight cores into the architecture by 2007.

Intel and AMD each launched their first multicore chips in 2005. Intel launched the dual-core 3.2GHz Pentium Extreme Edition 840 processor and 955X Express chip set in April of that year, followed later by dual-core versions of its Xeon and Itanium server products.

AMD, which had become a more formidable competitor to Intel two years earlier with the first release of its 64-bit x86 Opteron server chips, rolled out its dual-core Opteron 800 and Athlon 64 X2 processors within a week of Intel's dual-core processor announcements.

AMD had a plan in place for eventually getting to two cores with its initial Opterons, according to Insight 64's Brookwood. The company had designed the Opteron to make an easy transition to dual-core as transistors shrank, and two years after releasing its first Opteron, a two-core version was ready.

"AMD had a much cleaner architecture solution, and it allowed AMD to have an advantage over Intel [initially]," he said.

Intel's initial dual-core chip was essentially a two-die, multichip package with a shared memory interface, Brookwood said. And where Intel initially relied on a front-side bus to connect to memory, AMD's chips offered an integrated memory controller, which officials said gave the Opteron and Athlon chips a performance advantage over the Intel products. However, Intel officials have argued that the decision to stay at first with the front-side bus was made to address the balance needed in the chips between the cores and the bandwidth to the memory and I/O in order to realize the best performance.

Intel has since put integrated memory controllers into its chips and also uses the QuickPath Interconnect technology in conjunction with the controllers for better performance and scalability.

Since the first x86 dual-core chips were introduced in 2005, the number of cores on processors has grown rapidly. AMD officials in January launched new Opterons in the 6300 family that hold 12 and 16 cores. Meanwhile, Intel executives on Feb. 18 unveiled the high-end Xeon E7 v2 "Ivytown" server chips, which offer up to 15 cores and hold more than 4.3 billion transistors.

Initially, the biggest challenge for multicore chips was not the hardware but the software, according to Brookwood. There were few applications then that could take advantage of chips with multiple cores, so organizations weren't always seeing performance gains in their software by using systems with dual cores.

"You would run [the application] on a multicore machine and get half the performance of the multicore machine because the second core was sitting there just twiddling its thumbs," he said. "The issues in adopting multicore processors are almost entirely software-related."

Now most software is written to take advantage of multicore chips, Brookwood said. Even the low-end servers have two to four cores in them, and most PCs also run multicore chips. Vendors are even making chips for smartphones that have as many as four cores, and future plans are calling for eight or more.

Intel's Bajwa said that in the server world, a lot of cores make sense.

"In the server space, a lot of applications can benefit from a lot of cores," he said. "The server case is fairly solid, and the ecosystem can use them quite well."

In the PC space, it's not as common for applications to use four or more cores, and it's even less so in smartphones, he said. Still, the future track is for even more cores for chips that run in all computing devices, Bajwa said.

In the high-performance computing space, organizations run highly parallel applications that can take advantage not only of processors with large numbers of cores, but also GPU accelerators that can contain hundreds of cores and can be used to run workloads that have been offloaded from the main processor.

In addition, Intel's x86-based Xeon Phi coprocessors, which run in systems with Xeon chips and essentially have the same job as the GPU accelerators, hold more than 60 cores. Intel's upcoming 14-nanometer "Knights Landing" Xeon Phi chips not only will be able to be used as coprocessors, but also as the primary processors, according to Intel officials.

The number of cores will continue growing in the mobile space as well. Most recently, ARM—which designs systems-on-a-chip (SoCs) and licenses those designs to manufacturers like Samsung and Qualcomm—introduced its big.LITTLE architecture in 2011 as a way of addressing the sometimes conflicting user demands for more performance and longer battery life by pairing low-power Cortex-A7 cores with higher-performing cores on the same SoC. The Cortex-A7 is used for basic tasks, while the larger cores handle more compute-intensive jobs.

ARM in February announced its upcoming Cortex-A17 design, which will begin appearing in devices in 2015. The same day, MediaTek announced its upcoming MT6595 SoC design that will leverage the Cortex-A17 in a big.LITTLE configuration for an eight-core mobile chip combining four Cortex-A17 cores with four Cortex-A7 cores.

But as those cores are being added, there are challenges that will have to be dealt with, Intel's Bajwa said. One of the biggest challenges for chip engineers will be keeping balance in the processors. As more cores are added, the chips will be able to process and execute increasingly large numbers of workloads and data. However, engineers also have to ensure that there continues to be enough bandwidth to the memory and the I/O to ensure that applications can take full advantage of all the cores. Heat also will continue to be a challenge, as will power management, he said, bringing the industry back to the problems that the multicore design dealt with a decade ago.

"The trend is toward more and more compute with more threads and … with more cores," Bajwa said. "My sense is the cores are going to continue to grow over time."
