PCI Express

By eweek  |  Posted 2003-09-18

PCI Express appears to be one of those standards Intel is determined to run across multiple platforms. Can you talk about that? PCI … is running out of gas. From a technologist's perspective, you can't push it a lot further. PCI Express is this new serial interface technology that says that I'm communicating to my nearest neighbor in more of a daisy-chain fashion, and that allows us to go much, much faster. We expect that over the life of PCI Express, we're going to exceed 10 giga-transfers per second from the initial 2½-giga-transfers-per-second rate. And we expect that this will be as profound as, if not more profound than, PCI was when we did that in the early '90s.
To put that in context, when we did PCI, we thought about it as a chip-to-chip interface technology, and then it grew up into a bus. In this case [with PCI Express], we see it as a bus as well as how we connect our chips together. But it's not just for computing—for servers and desktops and mobile [devices]—we're also making it a fundamental element of our communications platform. So we expect the breadth of PCI Express to have an even greater impact on a broader set of industries than PCI did. And you'll see every chip set, every CPU that we build, from handset devices all the way up to big, honking Itanium servers, all utilizing a commonly architected interconnect technology in PCI Express.
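As a rough way to put those transfer rates in context, here is a minimal sketch of how a per-lane signaling rate translates into usable bandwidth, assuming the 8b/10b line encoding used by the first PCI Express generations (the calculation is illustrative, not something specified in the interview):

# Rough sketch: usable per-lane PCI Express bandwidth from the signaling rate,
# assuming 8b/10b line encoding (8 data bits carried in every 10 bits on the wire).
def lane_bandwidth_mb_per_s(giga_transfers_per_s):
    bits_per_s = giga_transfers_per_s * 1e9   # one bit per lane per transfer
    data_bits_per_s = bits_per_s * 8 / 10     # 8b/10b encoding overhead
    return data_bits_per_s / 8 / 1e6          # bits -> bytes -> megabytes per second

# The initial 2.5 GT/s rate and the 10 GT/s figure mentioned above; the 10 GT/s
# number is computed with the same encoding purely for comparison.
for rate in (2.5, 10.0):
    print(f"{rate} GT/s -> {lane_bandwidth_mb_per_s(rate):.0f} MB/s per lane")

At the initial 2.5 GT/s rate this works out to roughly 250 MB/s per lane in each direction; wider links (x4, x8, x16) scale that figure up accordingly.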
You expect to see it take off by the end of next year? You see the first products here [at IDF]. There's a real chip set that will be going into production with it. … Those are early beta versions of the real products that are going into production, so you'll start to see some of them showing up late this year. But next year is when you'll really see the first wave of PCI Express products. I expect that … in '05 there'll be the avalanche of PCI Express. Next year there'll be a lot of early products, but there are still a lot of PCI-X things going on. But in '05, there'll just be an avalanche of products across all segments.

Throughout the event this week, Intel has talked about Hyper-Threading, multithreading, dual-core processing in its chips. Why is this important to enterprises? There are two ways to consider that. … We started back in the 386 and 486 with what we called instruction-level parallelism, having the processor do more than one instruction at a time. That's what pipelining and super-pipelining, superscalar, out-of-order superscalar pipelining, all of that kind of stuff was about. But you're still doing instruction-level parallelism—taking a thing and [asking] how can I make two, three, four, five instructions operate at once?

Threading is the next abstraction. What we see is we're running out of gas with instruction-level parallelism. There's still more there, but it's becoming diminishingly cost-effective for me to get to the next level of parallelism at the instruction level. Thread-level parallelism is jumping up to the next paradigm. Rather than looking at single instructions, I'm looking at basic blocks and how I can execute those basic blocks in parallel. So I start executing multiple threads. Microsoft put threading into the operating system starting back with NT, so that's gotten fairly mature. In the server and supercomputer space, they've been doing threading and threading tools for a couple of decades, so that software structure is fairly mature.

Now what we have is this great opportunity where I now have a big enough transistor budget—I've got 90- and 65-nanometer [processes]—that I can really start putting in not just little hooks for threading, which is what Hyper-Threading was. It took a very sophisticated, out-of-order core and said, "If I parallelize just a few resources … you fake the software into thinking that I've got two threads of execution, and in lots of cases I got benefit from that, 20, 30 percent." What multicore says [is] not only can I do that, but I now have enough transistors [so] that I can start putting two cores in place. … Tulsa [a future dual-core Xeon chip] has two Hyper-Threaded cores, so from the software perspective, it looks like four threads of execution. From a software perspective, you start seeing a lot of threads being able to execute in parallel. Paul [Otellini] also said that we're going to have that in both desktop and mobile products as well. …

Just like we've had this rapid progression over the last decade in instruction-level parallelism, you're going to see the same thing with threads and multiple cores: We're going to have Hyper-Threading, we're going to have multiple cores, we're going to have speculative multiple cores, we're going to have out-of-order, speculative multiple cores, just a very, very rich set of innovations happening for thread-level parallelism. This year we touched on it with Tanglewood [a future Itanium chip with more than two cores]. Two cores isn't the limit. The number's going to be a lot bigger than that as we look forward.
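To make the thread-level-parallelism point concrete, here is a minimal sketch of splitting a job into independent chunks and letting the operating system schedule them across however many hardware threads the processor exposes, whether those come from Hyper-Threading, multiple cores, or both. The language and APIs are illustrative assumptions, not anything prescribed in the interview:

import os
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # Each chunk is an independent unit of work that can run on its own hardware thread.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    # Logical processors visible to software: a dual-core chip with Hyper-Threading,
    # like the Tulsa part described above, would report four.
    workers = os.cpu_count() or 1
    n = 10_000_000
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]

    # Processes are used here so the chunks truly run in parallel under CPython;
    # the hardware-level point about thread-level parallelism is the same.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(f"{workers} logical processors, result = {total}")

The point is only that once the work is expressed as independent threads of execution, each additional logical processor the hardware exposes can pick up another chunk without the software changing.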
This is the foundation of how we're going to be building computers and microprocessors for the next decade, decade and a half.

