SAN FRANCISCO—By 2015, Intel may deliver processors with tens or even hundreds of individual cores, Intel researchers said Thursday at the Intel Developer Forum here.
This “many core” strategy may support hundreds or thousands of instruction threads, a capability that will create its own problems in managing those threads with the appropriate compiler software, executives said. Intel Corp.’s top researcher also indicated that future Intel microprocessor designs will steal a page from rival AMD (Advanced Micro Devices Inc.).
The last day of the Intel Developer Forum is typically marked by a sci-fi look at the future, and Intel researchers debuted “super-resolution” video techniques, personal communicators, and parallelism techniques for multiprocessor systems.
The company also described an important breakthrough: the first laser implemented in standard CMOS silicon, a critical step in developing future optical interconnects.
Justin Rattner, an Intel senior fellow in the Corporate Technology Group who has replaced Pat Gelsinger as the public face of Intel’s research division, said that Intel’s research teams are forced to conceptualize ideas 10 years into the future to make sure that they become products in time.
Rattner compared the evolution of Intel’s platforms to that of the peppered moth, whose lightly colored wings grew darker during the Industrial Revolution to match the soot being deposited on trees and buildings. He said Intel works with users and customers to determine the direction of the company’s evolution, to make sure that the company stays in step with their needs.
“The key to this on our platform is to never stop evolving. If you stop evolving, you die,” Rattner said.
The theme of this IDF, however, has been a deeper exploration of Intel’s dual-core plans, and Rattner said Intel scientists have already begun thinking about how to anticipate and solve problems as the technology develops.
Two of those problems: developing the compiler software needed to send a balanced load of instructions to the individual cores, and transferring information between the cores and the rest of the system.
In a project code-named “Shangri-la,” Intel researchers used a language named “Baker” to manage the flow of data in a series of eight simulated cores, each managing eight instruction threads.
Intel uses compiler software to translate instructions coded in C++ and other languages into the machine instructions used by the cores. Baker, apparently, will place even more intelligence in the compiler, using the Shangri-la runtime to route instructions and even take unused cores offline to save power.
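Neither Baker nor the Shangri-la runtime has been described in detail, but the behavior Rattner outlined, a runtime that balances work across many cores and idles the ones it does not need, can be sketched in ordinary Python. Everything below (the worker pool, the shared queue, the core count) is a hypothetical illustration, not Intel’s implementation:

```python
import queue
import threading

NUM_CORES = 8   # mirrors the eight simulated cores in Intel's demo
work = queue.Queue()

def core_worker(core_id, stop):
    """Pull work items off the shared queue until told to stop."""
    while not stop.is_set():
        try:
            item = work.get(timeout=0.1)
        except queue.Empty:
            continue    # idle: a real runtime could power-gate this core
        print(f"core {core_id} handled {item}")
        work.task_done()

stop = threading.Event()
cores = [threading.Thread(target=core_worker, args=(i, stop))
         for i in range(NUM_CORES)]
for t in cores:
    t.start()

for n in range(32):              # enqueue simulated packet workloads
    work.put(f"packet-{n}")

work.join()                      # block until every packet is handled
stop.set()
for t in cores:
    t.join()
```

A real many-core runtime would make these scheduling decisions in hardware-aware compiled code rather than a thread pool, but the load-balancing and idle-core questions are the same ones Rattner described.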
Today’s chips transfer data by way of pins, the tiny pieces of metal along the chip’s edge. As data rates climb higher and higher, the number of pins required will exceed the physical space along the edge of the die, Rattner said.
To solve this, Intel is considering two alternatives: “3-D stacking,” in which a dedicated memory wafer is bonded on top of the logic wafer, and die stacking, a more conventional technique.
In 3-D stacking, an entire wafer would be bonded on top of another. Normally, a 300-mm wafer is fabricated as a discrete unit, and the microprocessor dice etched into its surface are removed and packaged.
In Intel’s research scenario, however, a second wafer dedicated to DRAM (dynamic RAM) would be bonded to the top of the microprocessor wafer, creating millions of possible connections. In this case, on-chip memory controllers would be required, a tactic already employed by AMD.
Rattner said it was “inevitable” that Intel’s future multicore designs would emulate one element of rival AMD’s processors and use an on-chip memory controller.
“You have to have an on-chip memory controller—there’s no place else to go,” Rattner said, adding that he expected a “rather large” number of memory controllers to be built directly into the die.
The other alternative Intel is considering is stacking the processor dice directly on top of one another, Rattner said, similar to the way in which Intel stacks flash chips one on top of the other.
Silicon photonics will be another way for Intel to solve the data problem, at least between chips. Optical fiber is already used in Fibre Channel connections, for example. A few weeks ago, however, Intel created the first continuous-wave silicon laser, the source of the optical signal, which Rattner said was “truly a breakthrough.”
Rattner also invited Intel researchers on stage to show off three of Intel’s ongoing projects: a personal communicator that serves as an adjunct to the notebook PC; a research technique that uses computational power to improve the resolution of recorded video; and a project to virtualize not just the processor, but also other components in the system.
In one technique, a researcher took several still photographs of a scene and compared them with a grainy, low-resolution video of the same scene. While the video, taken with a cell phone, looked horrible when scaled up to normal resolution, the still images, combined with mathematical interpolation, yielded a much clearer picture. The same technique eventually could be used on the fly to improve the apparent resolution of prerecorded video.
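Intel did not detail the math behind the demo, but the core idea of multi-frame super-resolution, fusing several slightly shifted low-resolution frames onto a finer pixel grid, can be sketched briefly. The following Python/NumPy function is a hypothetical illustration, not Intel’s algorithm, and it assumes the sub-pixel shift of each frame is already known:

```python
import numpy as np

def super_resolve(frames, shifts, scale=2):
    """Fuse shifted low-resolution frames onto a finer pixel grid.

    frames -- list of 2-D arrays (grayscale low-res frames)
    shifts -- per-frame (dy, dx) offsets, in high-res pixels
    scale  -- upsampling factor
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Scatter each low-res sample to its shifted high-res position.
        ys = np.arange(h) * scale + dy
        xs = np.arange(w) * scale + dx
        acc[np.ix_(ys, xs)] += frame
        hits[np.ix_(ys, xs)] += 1
    hits[hits == 0] = 1          # leave unsampled pixels at zero
    return acc / hits

# Four 2x-shifted samplings of a tiny gradient recover the full image.
hi = np.arange(64, dtype=float).reshape(8, 8)
shifts = [(dy, dx) for dy in (0, 1) for dx in (0, 1)]
frames = [hi[dy::2, dx::2] for dy, dx in shifts]
assert np.allclose(super_resolve(frames, shifts), hi)
```

In a real system the shifts are unknown and must be estimated by registering the frames against one another, which is presumably where much of the computational power Rattner mentioned comes in.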
A second presentation took the virtualization concept a step further. In a demonstration, an Intel integrated graphics controller was shared between two PCs. Future work will include storage and networking, Rattner said.