Intel CTO Rattner Talks Nehalem, Virtualization
SAN FRANCISCO—Although Intel will launch its 45-nanometer Penryn processor on Nov. 12, most of the talk at the 2007 Intel Developer Forum here Sept. 18-20 focused on the company's new microarchitecture, called Nehalem, which is due in 2008. Justin Rattner, the Santa Clara, Calif., company's chief technology officer and an Intel senior fellow, talked with eWEEK Staff Writer Scott Ferguson about Nehalem and the company's road map.
Intel's 45-nanometer Penryn family of processors is due out in November, but much of the talk at this year's IDF focused on Nehalem. What can you tell us about the new architecture and what it means for Intel?
Nehalem is a significant evolution in our multicore strategy. We moved to a much more flexible and versatile underlying architecture, and that will let us offer Nehalem technology in a variety of different arrangements.
I think [Intel CEO] Paul [Otellini] alluded to that in his address. It will scale from one core to eight cores, and we have never had that kind of flexibility, which means that developers and users will have access to the Nehalem technology up and down the product line in ways that we haven't been able to offer in the past.
Nehalem continues our improvements at the microarchitecture level and builds on the momentum around the Core and Core 2 architecture, so it continues to improve single-processor performance and improves energy efficiency [performance]. It's much more versatile from a power management point of view, and it will have new modes that will take it very quickly in and out of low-power states. From a scalability and energy efficiency point of view, it's really a next-generation microprocessor. All of these dimensions are what we will be focusing on in the future.
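Rattner's mention of moving quickly in and out of low-power states maps to what operating systems expose as processor idle states (C-states). As an illustrative aside, and not anything Nehalem-specific: on a Linux system, the cpuidle subsystem reports each idle state and the cumulative time spent in it. A minimal sketch, assuming the standard /sys/devices/system/cpu/cpu0/cpuidle layout:

```c
/* Illustrative sketch: list CPU idle (C-)states and residency on Linux.
 * Assumes the standard cpuidle sysfs layout; paths vary by kernel. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char path[128], name[64];
    unsigned long long usec;

    for (int state = 0; state < 16; state++) {
        /* Each state directory exposes a human-readable name... */
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/name", state);
        FILE *f = fopen(path, "r");
        if (!f)
            break;                          /* no more states */
        if (!fgets(name, sizeof(name), f))
            name[0] = '\0';
        name[strcspn(name, "\n")] = '\0';
        fclose(f);

        /* ...and the cumulative time (microseconds) spent in that state. */
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/time", state);
        f = fopen(path, "r");
        if (!f)
            break;
        if (fscanf(f, "%llu", &usec) != 1)
            usec = 0;
        fclose(f);

        printf("state%d: %-10s residency: %llu us\n", state, name, usec);
    }
    return 0;
}
```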
One of the most notable additions to Nehalem is its QuickPath architecture, which includes an integrated memory controller. Can you tell us more about that and how it compares to Advanced Micro Devices' DirectConnect architecture?
In the spring of last year, Steve Pawlowski [an Intel senior fellow] gave a tech-insight talk, and that question of an integrated memory controller came up. I think Steve said, accurately, that we were perfectly aware of integrated memory controllers; we [Intel] had developed integrated memory controllers, and we have killed more designs with integrated memory controllers than AMD has built. So our view was that an integrated memory controller was not required to deliver high performance.
I think the press began equating integrated memory control with high performance, but with Core 2 we retained the front-side bus architecture and we beat the pants off of everybody without an integrated memory controller. The argument was that it was an engineering decision: At some point it will make sense to integrate the memory controller, and when that happens, we will. We looked at that trade-off with Nehalem and decided that for that generation it made sense.
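The performance question underneath the memory-controller debate is latency: an on-die controller shortens the round trip to DRAM that a front-side bus adds a hop to. A dependent pointer chase, where each load cannot start until the previous one finishes, is the standard way to measure that round trip. A minimal, illustrative sketch in C (not an Intel benchmark; the buffer size and iteration count are arbitrary assumptions):

```c
/* Illustrative pointer chase: each load depends on the previous one,
 * so the loop time approximates round-trip memory latency. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (1 << 24)        /* 16M pointers (~128MB), far beyond cache */
#define STEPS (1 << 24)        /* number of dependent loads to time */

int main(void)
{
    size_t *chain = malloc(N * sizeof *chain);
    if (!chain)
        return 1;

    /* Sattolo's algorithm: build a single random cycle, so the chase
     * visits every slot and hardware prefetchers cannot predict it. */
    for (size_t i = 0; i < N; i++)
        chain[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (long s = 0; s < STEPS; s++)
        p = chain[p];                   /* serialized, dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per dependent load (p=%zu)\n", ns / STEPS, p);
    free(chain);
    return 0;
}
```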
The analogy I use is the copper [interconnect] technology used in [IBM's] PowerPC. They never delivered a microprocessor that ran at even half the clock speed of the then-available Intel processor. So if copper was so almighty important, why didn't it deliver? The metallization technology was not the determining factor in chip performance at that time. The transistors needed to get faster before the wires got faster. IBM made the wires faster but not the transistors, so it was like having a Formula One car between stoplights. We have copper technology, and we worked on it for many years; then we decided that the transistors were running fast enough to make the wires run faster. We then put copper into production, but we didn't deploy it until we needed it.
What are some of the technical challenges for Intel as its chips move from 65-nanometer to 45-nanometer and ultimately down to 32-nanometer?
I tend to think that 45[-nanometer] is the really significant jump. A lot of people talk about the end of Moore's Law, and we had Gordon Moore here [Sept. 18 to say Moore's Law will end sometime], but we completely changed the transistor architecture at 45 nanometers. We abandoned 40 years of development with the silicon gate MOS transistor, replaced the silicon in the gate, replaced the silicon dioxide in the insulating layer, and yet stayed right on schedule in terms of bringing this technology to market.
So, talk about the challenge of shrinking line widths: It was hard to abandon a 40-year-old transistor architecture for a completely new one. I think that doesn't get enough attention.
If we thought about it for a while, we could say that Moore's Law did come to an end for the silicon gate MOS transistor at 45 nanometers, and a new Moore's Law has started up around the high-k metal gate. Now you have to ask how long that version of Moore's Law will last. For us, each generation of technology represents a new set of challenges, and we attack them with a lot of confidence. We have overcome them in the past, and we think we will overcome them in the future.
All those same issues are still there. The lithography issues are tough. We did not go to immersion [lithography] at 45 nanometers, and we haven't commented about lithography for 32 [nanometers]. We talk about 32 [nanometers] being the second generation of the high-k metal gate transistor, so it's the brand-new baby, and it's still growing up. We will do a better job on the second one and the third one, and we will continue to work with that until another one of these physical barriers [comes along].
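For a sense of what each of those node transitions buys, the scaling arithmetic is straightforward: going from 65 to 45 to 32 nanometers shrinks linear dimensions by roughly 0.7x per generation, so the area of a given circuit halves (0.7 squared is about 0.5) and transistor density roughly doubles, which is the geometric form of Moore's Law. A quick worked example:

```c
/* Worked example: linear shrink per process node and the implied
 * area/density scaling (the geometric form of Moore's Law). */
#include <stdio.h>

int main(void)
{
    double nodes[] = { 65.0, 45.0, 32.0 };         /* process nodes, nm */

    for (int i = 1; i < 3; i++) {
        double linear = nodes[i] / nodes[i - 1];   /* ~0.7x per generation */
        double area   = linear * linear;           /* ~0.5x: area halves */
        printf("%gnm -> %gnm: linear %.2fx, area %.2fx, density %.2fx\n",
               nodes[i - 1], nodes[i], linear, area, 1.0 / area);
    }
    return 0;
}
```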
Where does the company see the Intel Architecture going in the future?
We certainly think that it's alive and well, and we want to take it down. This was Paul's 10X challenge: Can we take it down 10X in power, and can we take it up 10X in performance? That is “Larrabee.” We beat the 10X down in power two years early, and that is the “Menlow” [ultraportable] platform, which will be in production next year with the “Silverthorne” processor, which is one-tenth the power of our ultra-low-voltage processor. I think a lot of people doubted that we could take the Intel Architecture down into the sub-1-watt range, and “Moorestown” [platform] continues that trend. We will probably take active power down by another factor of two and make similar improvements in idle power and [power] leakage.
So we are taking it down. We are taking the full architecture down with the low-power IA Silverthorne, which is based on what we call the Bonnell core, a complete implementation of the architecture, right down to building virtualization into it and all the other gee-whiz features like SSE [Streaming SIMD Extensions] processing. If it is in the Intel Architecture, it's in Silverthorne. So going down in power does not mean losing features or giving up compatibility. Then we intend to go up by taking the Core architecture and enhancing it in the area of floating-point performance, which is required for these high-throughput applications that include, but are not limited to, graphics.
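Since Rattner singles out SSE as part of carrying the complete architecture down to Silverthorne, here is a minimal sketch of what that support means for a developer: one SIMD instruction operating on four packed single-precision floats, written with compiler intrinsics that any SSE-capable IA part can run. The values are arbitrary illustration:

```c
/* Minimal SSE example: add four packed floats in one instruction. */
#include <stdio.h>
#include <xmmintrin.h>                /* SSE intrinsics */

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float r[4];

    __m128 va = _mm_loadu_ps(a);      /* load 4 floats (unaligned OK) */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vr = _mm_add_ps(va, vb);   /* one SIMD add, four lanes at once */
    _mm_storeu_ps(r, vr);

    printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
    return 0;
}
```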
It's still the IA architecture that you know and love, with full support from all the Intel tools and third-party tools and all the industry knowledge around IA. Now it's going up, if not way up, in terms of crunch power, and we are positioning IA to tackle this new range of applications that we have been talking about for the last few years, such as the tera-scale applications. We think we can do it all while retaining strong architecture compatibility; strong user, programmer and developer familiarity; as well as customer confidence that this is an architecture that they know and trust and expect to be around for a long time.
What is Intel doing to build virtualization onto the processor?
We spent a lot of time and research on the virtualization topic. I think that perhaps we didn't focus on what the industry is doing with it. We were trying to create a more secure computing environment within the chip, and this notion of using virtualization to do that struck us as quite novel and interesting: You could build independent virtual machines that each define their own level of security. You could have open virtual machines and closed virtual machines and really control the flow of information between virtual machines.
A lot of what the industry is doing, in particular on the server side, is consolidation. People are saying they can have so many copies of [Microsoft] Windows, or a mixed Windows and Linux environment, and put each one in its own virtual machine. They can also dynamically provision each of the virtual machines, so that if one crashes, they can reboot that individual virtual machine without bringing down the system.
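That per-virtual-machine lifecycle is what management APIs expose. As an illustrative sketch only, using the open-source libvirt C API (libvirt and the domain name "web-frontend" are assumptions for the example, not anything Rattner names), here is rebooting one guest while the host and the neighboring virtual machines keep running:

```c
/* Illustrative sketch: reboot a single guest via the libvirt C API.
 * "web-frontend" is a hypothetical domain name; link with -lvirt. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn) {
        fprintf(stderr, "failed to connect to hypervisor\n");
        return 1;
    }

    virDomainPtr dom = virDomainLookupByName(conn, "web-frontend");
    if (dom) {
        virDomainReboot(dom, 0);      /* only this VM restarts */
        virDomainFree(dom);
    } else {
        fprintf(stderr, "domain not found\n");
    }

    virConnectClose(conn);
    return 0;
}
```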
On the client side, you have things like VMware Fusion and Parallels for the [Apple] Macintosh, where you can switch between Windows and Mac OS X instantly. Now that's to the point where you don't even think you're switching; it's as if you just have different windows, and those windows represent applications running on different underlying operating systems.
We think that virtualization is a great, powerful technology that is at the earliest stage of its application. We'd now like to get back to the security application, which is where we started. We are just bringing out this trusted execution technology, which we used to call “LaGrande,” which guarantees that the image you load in the virtual machine is the image that you wanted to load. So we have this notion of secure boot, and we'd like to get back to that. We have focused a lot on processor virtualization, and there's a lot of work to be done on platform virtualization: How do we virtualize the platform so we have to do less in software and can do more in the hardware? One of the things we are doing with the USB 3.0 development is developing the changes in the USB architecture to support virtualization, and then you have to look at what we are doing with PCI Express, USB and graphics. We have an eye toward supporting virtualized environments.
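On the processor side, the hardware virtualization support Rattner describes (Intel VT-x) advertises itself through CPUID: leaf 1 sets ECX bit 5 when VMX is available. A minimal check, using the cpuid.h helper that GCC and Clang ship:

```c
/* Check for Intel VT-x (VMX) support: CPUID leaf 1, ECX bit 5. */
#include <stdio.h>
#include <cpuid.h>                    /* GCC/Clang __get_cpuid helper */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 1 not supported\n");
        return 1;
    }

    printf("VT-x (VMX): %s\n",
           (ecx & (1u << 5)) ? "supported" : "not supported");
    return 0;
}
```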