SAN FRANCISCO—Although Intel will launch its 45-nanometer Penryn processor on Nov. 12, most of the talk at the 2007 Intel Developer Forum here Sept. 18-20 focused on the company's new microarchitecture, called Nehalem, which is due in 2008. Justin Rattner, the Santa Clara, Calif., company's chief technology officer and an Intel senior fellow, talked with eWEEK Staff Writer Scott Ferguson about Nehalem and the company's road map.
Intel's 45-nanometer Penryn family of processors is due out in November, but much of the talk at this year's IDF focused on Nehalem. What can you tell us about the new architecture and what it means for Intel?
Nehalem is a significant evolution in our multicore strategy. We moved to a much more flexible and versatile underlying architecture, and that will let us offer Nehalem technology in a variety of different arrangements.
I think [Intel CEO] Paul [Otellini] alluded to that in his address. It will scale from one core to eight cores, and we have never had that kind of flexibility, which means that developers and users will have access to the Nehalem technology up and down the product line in ways that we haven't been able to do in the past.
Nehalem continues our improvements at the microarchitecture level and builds on the momentum around the Core and Core 2 architectures, so it continues to improve single-processor performance and energy efficiency. It's much more versatile from a power-management point of view, and it will have new modes that will take it very quickly in and out of low-power states. From a scalability and energy-efficiency point of view, it's really a next-generation microprocessor. All of these dimensions are what we will be focusing on in the future.
One of the most notable additions to Nehalem is its QuickPath architecture, which includes an integrated memory controller. Can you tell us more about that and how it compares to Advanced Micro Devices' DirectConnect architecture?
In the spring of last year, Steve Pawlowski [an Intel senior fellow] gave a talk about tech insight, and that question of an integrated memory controller came up. I think Steve said, accurately, that we were perfectly aware of integrated memory controllers, that we [Intel] had developed integrated memory controllers and had killed more designs with integrated memory control than AMD has built, so our view was that an integrated memory controller was not required to deliver high performance.
I think the press began equating integrated memory control with high performance, but with Core 2 we retained the front-side bus architecture and we beat the pants off of everybody without the integrated memory controller. The argument was that it was an engineering decision, and at some point it will make sense to integrate the memory controller, and when that happens we will. We looked at that trade-off with Nehalem, and we decided that for that generation it made sense.
The analogy I use is to look at the copper interconnect technology used in [IBM's] PowerPC. They never delivered a microprocessor that even ran at half the clock speed of the then-available Intel processor. So if copper was so almighty important, why didn't it deliver? The metallization technology was not the determining factor of chip performance at that time. The transistors needed to get faster before the wires got faster. IBM made the wires faster but not the transistors, so it was like having a Formula One car between stoplights. We have copper technology and we worked on it for many years, and then we decided that the transistors were running fast enough to justify making the wires faster. We then put copper into production, but we didn't deploy it until we needed it.
What are some of the technical challenges for Intel as its chips move from 65-nanometer to 45-nanometer and ultimately down to 32-nanometer?
I tend to think that 45[-nanometer] is the real significant jump. A lot of people talk about the end of Moore's Law, and we had Gordon Moore here [on Sept. 18 to say Moore's Law will end sometime], but we completely changed the transistor architecture at 45 nanometers. We abandoned 40 years of development with the silicon gate MOS transistor, replaced the silicon in the gate, replaced the silicon dioxide in the insulating layer, and yet stayed right on schedule in terms of bringing this technology to market.
So, beyond the challenge of shrinking line widths, it was hard to abandon a 40-year-old transistor architecture for a completely new one. I think that doesn't get enough attention.
If we thought about it for a while, we could say that Moore's Law did come to an end for the silicon gate MOS transistor at 45 nanometers and a new Moore's Law has started up around the Hi-k metal gate. Now you have to look at how long that version of Moore's Law will last. For us, each generation of technology represents a new set of challenges, and we attack them with a lot of confidence. We have overcome them in the past, and we think we will overcome them in the future.
All those same issues are still there. The lithography issues are tough. We did not go to immersion at 45 nanometers, and we haven't commented about lithography for 32 [nanometers]. We talk about 32 [nanometers] being the second generation of the Hi-k metal gate transistor, so it's the brand-new baby, and it's still growing up. We will do a better job on the second one and the third one, and we will continue to work with that until another one of these physical barriers [comes along].