Threading Our Way to Future Skills

Opinion: Intel's investment seeks to accelerate next-generation coding competence.

We're coming up on the seventh anniversary of John Hennessy's clarion call to coders, warning them that their skills were improving more slowly than the hardware, which one might think would take longer to turn over. The Stanford University professor, a pioneer of early RISC chip design, gave a stirring keynote speech at the Microprocessor Forum in San Jose in October 1999 that I still quote to people today. In fact, I cite it with particular emphasis, since I have to admit that I had cravenly bailed on this very subject three years earlier.

Hennessy told that ballroom full of chip-heads that the instruction-level parallelism and multi-threading facilities of modern microprocessors were not well-exploited by the programming languages and techniques that most developers were using. I can't disagree: In my ZD Press Java tutorial, published in 1996, I acknowledged that Java had built-in thread semantics to "meet the needs of a new generation of software" in maintaining responsive user interaction while also working with remote (and perhaps high-latency) networked resources. A dozen pages later, though, I conceded that "Java does not detect [thread] deadlocks, let alone prevent them" and that using threads in interesting ways was a skill that deserved one or more books of its own. Many writers have since taken up that challenge.

Last week, Intel confronted this problem with a substantial commitment to support university computer science programs with training, hardware, software tools and curriculum material to promote the development of multithread programming skills. This is, of course, completely in Intel's own interest: Intel's August 7 announcement of this effort states that "By the end of this year, Intel expects more than 75 percent of its mainstream server, desktop and laptop PC processors to ship as dual-core processors; with four-, eight- and many-cores on the horizon." Without the skills to use such resources efficiently and reliably, those complex chip designs are just microscopic modern art.

I spoke about Intel's program with Scott Apeland, director of the Intel Developer Network: "We see a challenge here, and we want to make it as easy as possible for the industry to make this significant change," he said. "Multi-core is probably one of the fastest architectural changes the industry has seen: We want to make it a smoother change for software developers."

Multicore has come into the mainstream rather quietly, it seems to me. When I bought my family a new Dell machine this past spring, I didn't even realize that I was getting a dual-core box until the first time I opened the Windows Task Manager and saw processor utilization line graphs for two CPUs. Developers will do well to know more, and Apeland told me that Intel has trained about 100 professors so far. "They're really interested in OpenMP," Apeland said, "and they'd like to learn about tools for understanding where the hot spots are, where the race conditions are, [and] how to use the APIs to make this simple."

"We think its mission-critical to prepare the next generation of programmers," Apeland continued. Ten years after admitting that multithread programming is hard, Im glad to see Intels resources being applied to that goal.

Tell me what would make concurrent coding more accessible to you at


Check out eWEEK.com for the latest news, reviews and analysis in programming environments and developer tools.