Q&A: IBM Vice President David Turek discusses what increasingly affordable supercomputing holds in store for business clients.
Supercomputing has become an important business for IBM. In recent years, the Armonk, N.Y., company has opened up a series of Deep Computing centers, where customers can access IBM compute resources and pay only for what they use.
In addition, IBM has pushed forward with its Blue Gene supercomputer, which last fall reached the number-one spot on the Top500 list of the fastest supercomputers.
IBM also is bringing Blue Gene to the commercial market, dubbing it eServer Blue Gene. David Turek, vice president of IBM's Deep Computing unit, spoke with eWEEK Senior Editor Jeffrey Burt about IBM's supercomputing business.
How would you describe IBM's supercomputing strategy?
I would describe it as entirely marketplace-focused, as opposed to technology-focused. To that extent, we spend a huge amount of time working with customers, looking at apps, and speculating about the evolution of apps and opportunities to really guide our technology decisions.
As a consequence, what you've really seen in the marketplace over the last few years is a portfolio of technologies we've brought to bear.
[Regarding] what the HPC [high-performance computing] market is all about, one begins to discover that it's actually composed of very distinct sub-elements, each with an affinity for different types of technologies, driven by the distinct nature of the apps they work with. That really becomes the main thrust behind what we're doing.
That doesn't subtract by any means from our focus on pursuing technology in dramatic ways, which I believe people agree they have seen with the emergence of Blue Gene and other technologies as well. But we can't disassociate those activities from each other. So, to a great extent, my daily activity is built around trying to get all the different parties at IBM working the different sides of this issue.
Given the commercialization of Blue Gene and how supercomputing is evolving, it's apparent that supercomputing is becoming a greater presence in the enterprise.
That's a trend that's been going on for quite some time. When you go back to our business activities in the mid-1990s, I think what changed is the radical improvement in processor performance in terms of the technologies available in this space.
That has opened up the opportunities to use this technology for a much broader audience than what was there 15 years ago. The amount of compute power is just dramatically different.
Of course, the price for computing has come down, driven by a variety of factors. One of the principal constraints in the utilization of this technology, which has been price, is a constraint that has been under relentless attack for quite some time.
Has there been a change in demand in the enterprise that has fueled this growth?
I think those things go hand in hand. If you go back in time, you find that access to high-performance computing was, in most enterprises, a restricted resource devoted to a very few; because the cost of the resource was so high, it was targeted at a very narrow set of opportunities.
But today you can buy multiteraflop systems for a couple of million dollars, and as a result, strategic planners and a variety of other functional areas within enterprise computing will look at this and say, "You know, this is pretty accessible to me."
Whereas historically, all the buying decisions and all the perspectives were kind of at the corporate level, now you see things cascading down as low as department levels. Because the cost of computing is [lower], you start to see that a significant amount of compute power falls within the budget parameters of a lot of departments within an institution.
The second thing is that the passage of time has spawned multiple new generations of potential users in the marketplace. You recall, if you go back to around 1988 or 1989, there was only a very, very small number of companies focused on parallel computing. Parallelism was still considered a fairly untried kind of phenomenon.
That all changed in the early 1990s. As a result, there has been a generation of students coming out of graduate schools, corporate users and people in research labs who've now had a significant amount of time to explore the utility of these kinds of technologies and become very familiar with them.
The passage of time has created this pool of skill that, coupled with the declining price and the observed strategic benefits that accrue to the enterprise, creates a perfect, well, I wouldn't say storm, because that has negative connotations, but maybe a perfect sunshiny day.