It's been argued that the leading edge of processor power has gotten too far ahead of the usability curve of enterprise software; that tomorrow's most powerful machines will therefore be greeted with indifference, rather than being hailed as much-needed solutions to pressing problems.
As much as I agree with this view of most desktop and personal markets, it's the wrong way to think about other strategic areas of IT. Supercomputing, if anything, is becoming more mainstream than ever, despite the setbacks suffered by some well-known builders of the most exotic machines. The onset of supercluster systems, many of them Linux-based, requires us to rethink our ideas about the supercomputer domain.
For example, I was startled by a passing comment in a year-end story about a Southern California maker of microchips. “The company started out making powerful chips for supercomputers,” wrote the Los Angeles Times, continuing: “When the market for supercomputers evaporated in the mid-1990s, Vitesse turned its attention to chips it was developing for the telecommunications industry …” Hold it right there.
I suggest that the right word is not "evaporated," but "evolved." From where I sit, the supercomputing market looks alive and well, fed by favorable price/performance trends and driven by relentlessly competitive markets, which force companies to explore software alternatives to traditional ways of designing and testing products, or delivering vital services.
I'm not just talking about traditional buyers of extreme machines, like the Department of Defense, or about bleeding-edge applications like gene sequencing and protein folding, even if those customers and their fat checkbooks are brightening the prospects of Cray Inc., the company whose name is almost a synonym for supercomputer. In the last several years, vast amounts of computing power have also been aimed at behavioral science, with clear paths to practical application in data visualization, or in discerning customer preference by mining our growing volumes of e-commerce data.
Even more prosaic applications come out of continuing work in computational mechanics. Tasks that range from plastic molding and sheet-metal forming to automobile crash testing are becoming less expensive and more predictable. The core work in these areas is years, even decades, old, but the affordability of adequate computing hardware is expanding its application.
In fact, of the high-performance computing sites that are tracked at www.Top500.org, almost half (as of November 2002) are categorized as industrial. Fewer than one in 10 is involved in government applications, secret or otherwise, and only one in four is considered a research installation; another one in five is in academia. A plurality of these machines have real-world jobs, a striking change from the June 1993 list (the earliest Top 500 tabulation), when industrial installations were fewer than three out of 10.
Of course, I'm fascinated by the development of programming languages aimed at the kinds of problems that are best cracked by supercomputers' vector and parallel techniques: Ada, of course, and Lisp, and even more exotic tools like Orca. But let's not be like the legendary professor whose student told him, "I've found a great thesis problem!" and who is said to have replied, "Wait! We'll need a new language to describe it!"
The languages of practical supercomputing are those of time to market, standards-based competition, and return on increasingly affordable investment.