Making the Chips

 
 
By John G. Spooner  |  Posted 2006-03-09

"Once youve got the ability to push the boundaries of how big you can make a chip and put that many transistors on it, it really opens up the boundaries of… what you can do with it—how many cores make sense, what you do with those cores."

If the project were to form the basis of Intel's future chip architectures, Tera-Scale design principles could be shared across all of the company's processor lines, from handheld devices and ultramobile PCs to notebook PCs, desktops and servers, by varying the number of general-purpose cores in each.
A server processor would be the best candidate for a large number of general-purpose cores alongside specialized cores such as XML-processing cores, whereas a handheld chip might use fewer general-purpose cores. Chips in all of the categories might add TCP/IP-processing and/or cryptography cores, however. The general-purpose cores, for their part, would also look different from Intel's current processors. The company's Core Architecture, revealed this week, emphasizes getting more work done while delivering greater energy efficiency than NetBurst, the architecture behind preceding chips such as the Pentium 4 and the dual-core Pentium D.
Intel has confirmed that, at a minimum, it will be capable of producing four-core chips with the architecture. The research project, however, is taking researchers in the direction of a "totally different architecture," said Bob Crepps, a technology strategist in Intel's Corporate Technology Group.

Tera-Scale has Intel researchers devising much simpler processor cores, likely short-pipeline, in-order cores, which would take up less room on a chip and crunch through data very quickly by dividing it into chunks and processing them in parallel (a rough software sketch of the idea appears below).

Specialized cores for TCP/IP or cryptography could have their own specific designs that emphasize efficiency. A cryptography core, for one, might be designed to be wider than 64 bits, something that would allow it to rip through a 1,024-bit or 2,048-bit key more quickly than a general-purpose 64-bit core, researchers said (the arithmetic behind that claim is also sketched below).

Challenges would remain in keeping such large numbers of cores supplied with data, likely leading to changes in cache design (Intel researchers are working on this as part of their configurable-cache, 3D-stacked-memory and high-bandwidth-memory experiments), in addition to system platform changes to speed up input/output.

Processor power and heat management would change as well, in part by varying the number of cores turned on at a given moment. General-purpose cores might turn on or off based on demand. Similarly, a cryptography core, if present, could be made to quickly awaken, make its calculations and then shut down. Cores could also work on data and, if they got too hot, hand it off to others, creating the data equivalent of a hot potato; the third sketch below models that handoff. The same mechanism could help servers remain up and running by supporting an on-chip fail-over system, researchers said.

"It drastically changes how applications and operating systems would run," McVeigh said. "Now that you have the abundance of cores, you get away from the notion of having to schedule things on one or a few cores… and are able to be more efficient, almost dedicate parts of the cores and I/O and memory to dedicated tasks to provide for better reliability, better performance… so we're not always having to balance the resources."

Software would also present a major challenge in putting Tera-Scale-like processors into production. It would require Intel to work with software developers to encourage them to create software that can take advantage of the many-core chips. Software changes could allow for something called speculative multithreading, in which compilers look for areas of parallelism in application code and break them up for parallel processing, researchers said. However, Intel has yet to disclose whether it has begun any such work with software makers.

Ultimately, "this effort isn't just within our Microprocessor [Technology] Lab. It spans all of the other areas as well, because it has impacts on communications and systems, where you're dealing with the memory, dealing with virtualization, partitioning, trust and how those apply directly to Tera-Scale-style platforms," McVeigh said.
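The chunk-and-process-in-parallel idea behind those simple cores maps naturally onto ordinary parallel software. The following Go sketch is an illustration rather than anything Intel has described: it splits a buffer into fixed-size chunks and hands each to its own goroutine, which stands in for a small core. The chunk size and the `sum` work function are invented for the example.

```go
package main

import (
	"fmt"
	"sync"
)

// sum stands in for whatever per-chunk work a small core would do.
func sum(chunk []int) int {
	total := 0
	for _, v := range chunk {
		total += v
	}
	return total
}

func main() {
	// A large buffer of work, split into fixed-size chunks.
	data := make([]int, 1<<20)
	for i := range data {
		data[i] = 1
	}
	const chunkSize = 1 << 14
	numChunks := (len(data) + chunkSize - 1) / chunkSize

	// One partial result per chunk, so the workers never contend on a lock.
	partials := make([]int, numChunks)

	var wg sync.WaitGroup
	for c := 0; c < numChunks; c++ {
		start := c * chunkSize
		end := start + chunkSize
		if end > len(data) {
			end = len(data)
		}
		wg.Add(1)
		go func(c int, chunk []int) { // each goroutine stands in for a small core
			defer wg.Done()
			partials[c] = sum(chunk)
		}(c, data[start:end])
	}
	wg.Wait()

	// A final reduction combines the per-chunk results.
	total := 0
	for _, p := range partials {
		total += p
	}
	fmt.Println("total:", total) // prints 1048576
}
```

Writing each partial result into its own slot avoids any locking between the workers, which mirrors the goal of keeping many small cores busy without forcing them to contend on shared state.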
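The case for a wider cryptography core is, at bottom, word-count arithmetic: a 2,048-bit operand occupies 32 words on a 64-bit datapath but only 8 words on a 256-bit one, so each pass over the key touches a quarter as many words. The 256-bit width in this small Go calculation is a hypothetical figure chosen for illustration; the researchers did not specify one.

```go
package main

import "fmt"

func main() {
	const keyBits = 2048

	// Words needed to hold the key at each datapath width.
	for _, width := range []int{64, 256} { // 256 is a hypothetical wide-core width
		words := keyBits / width
		fmt.Printf("%4d-bit datapath: %2d words per pass over a %d-bit key\n",
			width, words, keyBits)
	}
}
```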
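The demand-driven power-down and hot-potato handoff can be pictured as a scheduling policy. The sketch below is again an illustration under invented assumptions (the temperature model, the 80-degree threshold and the task structure are all made up): simulated cores pull tasks from a shared queue and re-queue any task they cannot finish before overheating.

```go
package main

import (
	"fmt"
	"sync"
)

// task is a unit of work that one core can hand off to another.
type task struct {
	id    int
	steps int // remaining units of work
}

const tempLimit = 80 // simulated degrees C at which a core hands work off

// coreTemp is a stand-in for a per-core thermal sensor: the core simply
// "heats up" with the amount of work done since it last handed something off.
func coreTemp(recentSteps int) int {
	return 40 + recentSteps
}

// core pulls tasks from the shared queue; if it gets too hot mid-task it
// re-queues the unfinished task for another core and cools back down.
func core(id int, queue chan task, done *sync.WaitGroup) {
	recentSteps := 0
	for t := range queue {
		handedOff := false
		for t.steps > 0 {
			t.steps--
			recentSteps++
			if coreTemp(recentSteps) > tempLimit && t.steps > 0 {
				fmt.Printf("core %d hands off task %d with %d steps left\n", id, t.id, t.steps)
				queue <- t // the "hot potato" handoff
				recentSteps = 0
				handedOff = true
				break
			}
		}
		if !handedOff {
			fmt.Printf("core %d finished task %d\n", id, t.id)
			done.Done()
		}
	}
}

func main() {
	const numCores, numTasks = 4, 8

	queue := make(chan task, numTasks) // buffered so a handoff never blocks
	var done sync.WaitGroup
	done.Add(numTasks)

	for id := 0; id < numCores; id++ {
		go core(id, queue, &done)
	}
	for i := 0; i < numTasks; i++ {
		queue <- task{id: i, steps: 100}
	}

	done.Wait()  // every task has finished on some core
	close(queue) // lets the core goroutines exit
}
```

Because a handoff resets the core's simulated temperature and every pickup makes at least one step of progress, the work always completes somewhere, the same property that an on-chip fail-over scheme would rely on to keep a server running.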


 
 
 
 
John G. Spooner, a senior writer for eWeek, chronicles the PC industry, in addition to covering semiconductors and, on occasion, automotive technology. Prior to joining eWeek in 2005, Mr. Spooner spent more than four years as a staff writer for CNET News.com, where he covered computer hardware. He has also worked as a staff writer for ZDNet News.
 
 
 
 
 
 
 
