Tera-scale Computing: Intel's Attack of the Cores

 
 
By John G. Spooner  |  Posted 2006-06-19
 
 
 
 
 
 
 

Intel researchers show how simple processor cores offer a radically different approach to building chips. Can this stab at the future go mainstream?

Intel is about to deliver the opening salvo in a wave of multicore processors that could ultimately lead to chips with scores of cores aboard. The chip maker will begin the rollout of its Core Microarchitecture—new chip circuitry that emphasizes power efficiency—June 26 with the arrival of the dual-core "Woodcrest" Xeon 5100 series server chip.
But Intel researchers, speaking at the VLSI Symposium June 15, said that they have already seen results from projects associated with the company's Tera-scale Computing effort, which explores processors containing tens or even hundreds of cores.
Intel has already implied that it is aiming for processors with more than 10 processor cores by the end of the decade. However, Tera-scale chips would look and act differently. They would be built from relatively simple general-purpose IA (Intel Architecture) x86 processor cores—with the potential to include specialized cores for some jobs—to boost performance by dividing up jobs and running them in parallel. Tera-scale chips would exploit a rule of semiconductor design—smaller, slower cores tend to use less power—to meet businesses' needs for performance while acknowledging concerns about matters like server power consumption.
"There's this advantage to simplifying the individual [processor] core, accepting the reduction in single-thread performance, while positioning yourself, because of the power reduction, to put more cores on the die," said Intel CTO Justin Rattner, in Hillsboro, Ore. "That's the energy-efficiency proposition of Tera-scale. Less is more, actually, in the case of a Tera-scale machine, because the underlying core efficiency is better than the cores we've been introducing this year."

Tera-scale chips would be particularly well suited to jobs that process large amounts of data, such as computer visualization, gesture-based control of a computer, or more business-oriented applications like data mining. But extracting the true performance potential of such a new approach won't be possible without improving chip technologies, including larger onboard memory caches, high-speed interconnects for distributing data, and more efficient clock timing systems. Nor will it succeed without getting software developers on board, many of whom are only now starting to tackle the move from single-threaded to multithreaded applications, Intel executives said.

"Every time you increase the number of threads, you're putting greater burden on the programmers to write the applications … to actually harness all that available parallelism," Rattner said.
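Rattner's point about the burden on programmers is easier to see with a toy example. The following is a minimal, hypothetical Python sketch—not Intel code—of the divide-and-combine pattern that many-core chips demand: one job is split into independent slices, each slice is handled by a separate worker, and the partial results are recombined.

```python
# Illustrative only -- not from Intel. To harness many cores, a job must
# be decomposed into independent pieces that run in parallel and are
# then recombined; that decomposition is the programmer's burden.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """The work one core would perform on its slice of the data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    """Divide one job across workers, then combine the partial results."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1000))) == sum(range(1000)))  # True
```

In CPython, CPU-bound work like this would need processes rather than threads for a real speedup, but the decomposition pattern—the part that falls on the programmer—is the same either way.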



 
 
 
 
John G. Spooner, a senior writer for eWeek, chronicles the PC industry, in addition to covering semiconductors and, on occasion, automotive technology. Prior to joining eWeek in 2005, Mr. Spooner spent more than four years as a staff writer for CNET News.com, where he covered computer hardware. He has also worked as a staff writer for ZDNet News.
 
 
 
 
 
 
 
