Tera-scale Computing: Intel's Attack of the Cores

 
 
By John G. Spooner  |  Posted 2006-06-19
 
 
 


Intel is about to deliver the opening salvo in a wave of multicore processors that could ultimately lead to chips with scores of cores aboard.

The chip maker will begin the rollout of its Core Microarchitecture—new chip circuitry that emphasizes power efficiency—June 26 with the arrival of the dual-core "Woodcrest" Xeon 5100 series server chip.

But Intel researchers, speaking at the VLSI Symposium on June 15, said they have already seen results from projects in the company's Tera-scale Computing effort, which explores processors containing tens or even hundreds of cores.

Intel has already implied that it is aiming for processors with more than 10 processor cores by the end of the decade.

However, Tera-scale chips would look and act differently. They would be built from relatively simple general-purpose IA (Intel Architecture) x86 processor cores—with the potential to include specialized cores for some jobs—to boost performance by dividing up jobs and running them in parallel.

Tera-scale chips would exploit a basic semiconductor design law—smaller, slower cores tend to use less power—to meet businesses' needs for performance while addressing concerns such as server power consumption.


"There's this advantage to simplifying the individual [processor] core, accepting the reduction in single-thread performance, while positioning yourself, because of the power reduction, to put more cores on the die," said Intel CTO Justin Rattner, in Hillsboro, Ore.

"That's the energy-efficiency proposition of Tera-scale. Less is more, actually, in the case of a Tera-scale machine, because the underlying core efficiency is better than the cores we've been introducing this year."
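Rattner's "less is more" argument follows from the textbook CMOS dynamic-power model, in which power scales with voltage squared times frequency. The sketch below uses that standard formula with purely illustrative numbers (the voltages and frequencies are not Intel figures) to show why two slower, lower-voltage cores can beat one fast core on power for the same aggregate throughput:

```python
# Classic CMOS dynamic-power model: P = C * V^2 * f.
# Halving a core's frequency typically permits a lower supply voltage,
# so power falls faster than performance does.
# All numbers below are textbook illustrations, not Intel figures.

def dynamic_power(capacitance, voltage, frequency):
    """Dynamic switching power: P = C * V^2 * f (arbitrary units)."""
    return capacitance * voltage ** 2 * frequency

# One fast core: 3.0 GHz at 1.2 V.
fast = dynamic_power(1.0, 1.2, 3.0)

# Two half-speed cores: 1.5 GHz each at a reduced 0.9 V.
slow_pair = 2 * dynamic_power(1.0, 0.9, 1.5)

# Same aggregate cycles per second (2 x 1.5 = 3.0 GHz),
# but noticeably less total power.
print(f"one fast core:  {fast:.2f}")
print(f"two slow cores: {slow_pair:.2f}")
```

Realizing that advantage, of course, assumes the workload can actually use both cores at once—which is exactly the software problem the rest of the article turns to.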

Tera-scale chips would be particularly good for jobs requiring the processing of large amounts of data, such as computer visualization or using gestures to control a computer, or more business-oriented applications like data mining.

But extracting the true performance potential of such a new approach won't be possible without improved chip technologies, including larger onboard memory caches, high-speed interconnects for distributing data, and more efficient clock-timing systems.

Nor will it be successful without getting software developers, many of whom are just now starting to tackle the move from single-thread applications to multi-threaded applications, on board, Intel executives said.

"Every time you increase the number of threads, you're putting greater burden on the programmers to write the applications … to actually harness all that available parallelism," Rattner said.
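The burden Rattner describes is that the programmer, not the hardware, must divide the work. A minimal sketch of that explicit work-splitting pattern, using Python's standard thread pool (illustrative only—real multicore code must also handle shared state, load balancing, and synchronization, and CPU-bound Python code would use processes rather than threads):

```python
# Explicit work-splitting: the programmer partitions one job into
# chunks and hands each chunk to a worker running in parallel.
from concurrent.futures import ThreadPoolExecutor

def sum_chunk(chunk):
    """Stand-in for per-chunk work: sum a slice of the data."""
    return sum(chunk)

data = list(range(100_000))
n_workers = 4                       # 100_000 divides evenly by 4
size = len(data) // n_workers
chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    total = sum(pool.map(sum_chunk, chunks))

print(total)
```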


Radical Change


The Tera-scale approach is a radical change from Intel's Xeon 5100, which uses two complex processor cores.

But one of the driving forces behind the Tera-scale research is the fact that chip transistor counts, already in the billions, will continue to double over time.

Intel chips will approach 32 billion transistors by the end of the decade, researchers said.

The rising transistor counts give Intel the option of using large numbers of smaller cores without radically increasing chip area.

Thus far, Intel and others have used the extra transistors to create more complex chips with larger onboard memory caches.

But while the current approach brings increases in instruction processing, or work done per clock, that doesn't mean you're getting a commensurate increase in overall efficiency, said Steve Pawlowski, chief technology officer for Intel's Digital Enterprise Group in Hillsboro, Ore.

"One of the ways to get efficiency is you make the cores simpler and you do a lot of them and put them on the die. That's where Tera-scale is coming in. We're saying, 'Hey, for a certain class of workloads, you can take advantage of this parallelism. You can have extremely efficient architectures because you can use more of [the cores].'"

Shifting toward lots of simple cores—trading two Woodcrest cores for tens of 386-style cores—would greatly increase a chips parallel processing abilities, and thus offer more performance, analysts agreed. But it brings its own issues.


"The bigger question is, how do you take advantage of such a system?" said Dean McCarron, principal analyst with Mercury Research, in Cave Creek, Ariz.

"Not everything lends itself to that [many threads]. But, that said, everybody seems to be in agreement that this is the path we're pretty much forced to go down."

But programming for Tera-scale chips will require a completely different approach that uses lots of different threads simultaneously.

That's a concept only a few programmers are currently familiar with, Pawlowski said.


Getting to Work


So Intel is getting to work. In some cases, the company is working directly with large software makers.

Elsewhere, its Software Products Group is offering tools to assist programmers with multithreading, said James Reinders, director of marketing and business development for the group's Developer Products Division, also in Hillsboro.

The tools, including compilers, performance libraries, tuners and thread checkers, aim to address such challenges as scalability—how to make an application run faster on more than one core—correctness, or eliminating bugs, as well as ease of development.
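The correctness problems that thread checkers target are typified by the unsynchronized read-modify-write: two threads can both read an old value before either writes back, silently losing an update. A generic illustration of the bug class and its standard fix, a lock (this is not an Intel tool, just the kind of defect such tools flag):

```python
# Data-race illustration: "counter += 1" is a read-modify-write, not
# an atomic operation. Without the lock, concurrent increments can
# interleave and lose updates; with it, each increment is atomic.
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # remove this lock and updates can be lost
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)              # reliably 400000 only because of the lock
```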

"We're definitely seeing movement in attitudes of developers toward multithreaded applications," Reinders said.

"Over the next five years, I think we'll see most developers take an interest in understanding parallelism more."

At least one company, MainConcept, a maker of video codecs, has already adopted multicore, said CEO Markus Monig, in Aachen, Germany.

MainConcept found that optimizing for dual-core chips using Intels tools gave it a performance edge.

Codecs run about "1.8 times faster on dual-core machines, because you can actually cut the picture into slices and feed them to the separate processors," Monig said. "For us, the shift to multicore development has been pretty dramatic."
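The slice-parallel pattern Monig describes can be sketched as follows: cut a frame into horizontal slices, process each slice on its own worker, and reassemble the result. (The toy per-pixel "filter" below stands in for real codec work; this is not MainConcept's code.)

```python
# Slice a frame into horizontal bands and process them in parallel,
# then stitch the processed bands back together in order.
from concurrent.futures import ThreadPoolExecutor

def process_slice(rows):
    """Stand-in for per-slice codec work: brighten each pixel."""
    return [[min(pixel + 10, 255) for pixel in row] for row in rows]

frame = [[x % 256 for x in range(8)] for _ in range(8)]  # tiny 8x8 frame
n_slices = 2
height = len(frame) // n_slices
slices = [frame[i * height:(i + 1) * height] for i in range(n_slices)]

with ThreadPoolExecutor(max_workers=n_slices) as pool:
    # Executor.map preserves input order, so rows reassemble correctly.
    processed = [row for band in pool.map(process_slice, slices)
                 for row in band]

print(processed[0][:4])
```

Because each slice is independent, the speedup scales with slice count up to the core count—which matches the roughly 1.8x Monig reports on dual-core machines.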

He predicted others will follow. "Companies like us who are driven to be competitive … will have to," he said. "If your codec isn't fast, nobody will buy it. There can be lots of benefit [for] applications which take a lot of CPU power."

To be sure, despite extensive backing inside Intel's Corporate Technology Group—about 80 projects and 40 percent of the group's researchers are involved in Tera-scale research in some manner—and the existence of several niche markets that could take advantage of such technology today, Tera-scale may never fully materialize.

To be used in a production processor, Tera-scale technology would first have to be adopted by an Intel product group. Not all of the company's research projects are.

Meanwhile, Intel's PC processors are the company's bread and butter, and its executives are reluctant to make quick changes to them. Thus the company could switch directions and use a different technique to add more cores to its chips.

"I'm really excited about Tera-scale. It's just finding the right time to intersect" with Intel's product lines, Pawlowski said. "It would not be prudent of me to jeopardize our high-volume product line on a technology that still has some gestation period to go through."
