Supercomputing has become an important business for IBM. In recent years, the Armonk, N.Y., company has opened up a series of Deep Computing centers, where customers can access IBM compute resources and pay only for what they use.
In addition, IBM has pushed forward with its Blue Gene supercomputer, which last fall reached the number-one spot on the Top500 list of the fastest supercomputers.
IBM also is bringing Blue Gene to the commercial market, dubbing it eServer Blue Gene. David Turek, vice president of IBM's Deep Computing unit, spoke with eWEEK Senior Editor Jeffrey Burt about IBM's supercomputing business.
How would you describe IBM's supercomputing strategy?
I would describe it as entirely marketplace-focused, as opposed to technology-focused. To that extent, we spend a huge amount of time working with customers and looking at apps and speculating about the evolutions of apps and opportunities to really guide our technology decisions.
As a consequence, what you really see in the marketplace over the last few years is a portfolio of technologies we've brought to bear on this marketplace. …
[Regarding] what the HPC [high-performance computing] market is all about, one begins to discover that it's actually composed of very distinct sub-elements that are distinctive in the sense that they have an affinity for different types of technologies, which are driven by the distinct nature of the apps that they work with. That really becomes the main thrust behind what we're doing.
That doesn't detract by any means from our focus on pursuing technology in dramatic kinds of ways, which I believe people agree they have seen with the emergence of Blue Gene and other technologies as well. But we can't disassociate those activities from each other. So, to a great extent, my daily activity is built around trying to get all different parties at IBM working different sides of this issue.
Given the commercialization of Blue Gene and how supercomputing is evolving, it's apparent that supercomputing is becoming a greater presence in the enterprise.
That's a trend that's been going on for quite some time. When you go back to our business activities in the mid-1990s, I think what changed is the radical improvement in processor performance in terms of the technologies that are available in this space.
That has opened up the opportunities to use this technology for a much broader audience than what was there 15 years ago. The amount of compute power is just dramatically different.
Of course, the price for computing has come down, driven by a variety of factors. One of the principal constraints in the utilization of this technology, which has been price, is a constraint that has been under relentless attack for quite some time.
Has there been a change in demand in the enterprise that has fueled this growth?
I think those things go hand in hand. If you go back in time, you find that access to high-performance computing was in most enterprises a restricted resource that was devoted to a very few, because the cost of the resource was so high, it was targeted to a very narrow set of opportunities.
But today you can buy multiteraflop systems for a couple of million dollars, and as a result, strategic planners and a variety of other functional areas within enterprise computing will look at this and say, “You know, this is pretty accessible to me.”
Whereas historically, all the buying decisions and all the perspectives were kind of at the corporate level, now you see things cascading down as low as department levels. Because the cost of computing is [lower], you start to see that a significant amount of compute power falls within the budget parameters of a lot of departments within an institution.
The second thing is that the passage of time has spawned multiple new generations of potential users in the marketplace. You recall, if you go back to around 1988 or 1989, there was only a very, very small number of companies focused on parallel computing. Parallelism was still considered a fairly untried kind of phenomenon.
That all changed in the early 1990s. As a result, there has been a generation of students coming out of graduate schools and corporate users and people in research labs whove now had a significant amount of time to explore the utility of these kinds of technologies and become very familiar with them.
The passage of time has created this pool of skill that, coupled with the declining price and the observed strategic benefits that accrue to the enterprise, create a perfect—I wouldn't say storm, because that has negative connotations—but maybe a perfect sunshiny day.
Commercial Demand for Supercomputing
Regarding the commercialization of Blue Gene, can you talk about the demand and where it's coming from?
Interest has been very high. We have a Blue Gene Consortium which encompasses today about 45 institutions, academic research labs and partners. Awareness is extraordinarily high around the world and demand is significant as well.
Deployment in the early stages follows the classic pattern of deployment of innovative technologies, which is, you'll see a lot of groundbreaking work done in government labs, research laboratories and universities, and as value is demonstrated at those types of institutions, it'll carry over quickly into the more aggressive players in the industrial sector.
That model fits Blue Gene well by virtue of the fact that [it] is a very purpose-built kind of machine. Its design was never to be universal in terms of its affinity for every possible high-performance computing application.
What we're working through right now with a lot of our customers, technology partners and developers is to get a much deeper understanding of where the true utility of Blue Gene sits, because, as I outlined at the outset, our play in the marketplace is really predicated on a portfolio of technologies, and what we don't want to do is step all over ourselves.
The reason this is important is that the principal value proposition of Blue Gene is the proposition of ultrascalability. As a consequence, what people are really looking at is the scalability of their algorithms. Are the scaling attributes of Blue Gene such that it would motivate one to redesign an algorithm, for example, with the constraints of scale that you see in other architectures removed?
That kind of yin and yang is going to go on for a bit of time, but in the meantime were deploying them essentially at max capacity and we expect that to continue over the next several years.
When you announced the commercialization of Blue Gene last year, you spoke about expanding the portfolio of products based on it. Can you talk about your plans in that area?
There are a couple of ways to think about this. One is whether or not there are variations on the Blue Gene designs that can be entertained.
The second thing is, are there technologies, design approaches or other things coming out of Blue Gene that ought to be reflected in other parts of our product line, regardless of the source of the underlying technology? Work proceeds on both of those fronts on a continuing basis.
The other thing is that the physical footprint of computing, along with the cost of operating and cooling these systems, is becoming an increasingly pressing issue for customers. So the Blue Gene design, for example, has a roughly seven to eight times advantage over other conventional architectures with respect to the consumption of electricity. …
This is a serious issue, because if you look at not only the cost of electricity around the world—which varies dramatically—but also the availability of electricity around the world—that is, the fact that you might not have it 24 hours a day—one needs to begin to factor this phenomenon into the design of systems pretty directly.
One of the consequences of this is that the classic rules of thumb that people have used in the past to compare systems or to assess the progress of computer design in this space are going to have to be modified to more appropriately reflect these kinds of attributes, so that instead of talking about dollars per megaflop, for example, the appropriate kind of metric will probably become dollars per megaflop per kilowatt.
It reflects the very common-sense view that you simply cannot afford to buy something that consumes a disproportionate amount of resources to operate. As we look at our dense models of computing that revolve not only around Blue Gene but blade-based approaches to computing, these kinds of lessons are being factored in.
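As a rough illustration of how folding power into the metric can reorder a purchase decision, here is a minimal sketch. All prices, performance figures, power draws and electricity rates are hypothetical, chosen only to echo the "seven to eight times" electricity advantage the interview mentions; they are not real IBM or competitor specifications.

```python
def cost_per_mflop(price_usd: float, mflops: float) -> float:
    """Classic metric: purchase price per megaflop of peak performance."""
    return price_usd / mflops

def power_aware_cost(price_usd: float, mflops: float, kilowatts: float,
                     usd_per_kwh: float, hours: float) -> float:
    """Cost per megaflop over an operating period, with electricity
    folded into the price, in the spirit of a dollars-per-megaflop-
    per-kilowatt style metric."""
    energy_cost = kilowatts * hours * usd_per_kwh
    return (price_usd + energy_cost) / mflops

# Two hypothetical machines with equal peak performance over a
# three-year life: system A is cheaper to buy but draws 8x the power.
three_years = 3 * 365 * 24  # hours
a = power_aware_cost(price_usd=1_800_000, mflops=5_000_000,
                     kilowatts=800, usd_per_kwh=0.10, hours=three_years)
b = power_aware_cost(price_usd=2_000_000, mflops=5_000_000,
                     kilowatts=100, usd_per_kwh=0.10, hours=three_years)
print(f"System A: ${a:.3f}/MFLOP   System B: ${b:.3f}/MFLOP")
```

On the classic sticker-price metric, system A looks cheaper; once three years of electricity are counted, the lower-power system B comes out ahead, which is exactly the reordering the revised metric is meant to capture.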
Blue Gene's Influence on Other IBM Product Lines
How do you see Blue Gene influencing other product lines at IBM?
This is a pretty speculative proposition in the sense of what it means in terms of products. In terms of design principles, that part is clear. We will relentlessly pursue efficient operation … and scalability.
When we talk about scalability, it's not just the routine mention of the number of servers or processors in some particular domain, but actually it's a comment about how we expect algorithms to perform, applications to perform, software in general to perform in these systems.
The other side of this is that we're beginning to see in the marketplace an interest on the part of customers in what I'd call the hybridization of computing models, in the sense that as you start to deconstruct our portfolio and you make the observations that we have big SMPs [shared memory processors] built on Power and we have concentrated footprint kinds of servers built out of [Advanced Micro Devices Inc.'s] Opteron and Intel [Corp.] and ultrascaling kinds of architectures like Blue Gene, the game always comes back to the application at hand.
What were seeing is that there is no universal architecture for all possible applications. As a consequence, there is a greater and greater desire on the part of customers to acquire multiple technologies and merge them. That really is a reflection of the heterogeneity in their own application portfolio domains.
Right now, that's the situation where one might deploy Blue Gene coupled with an xSeries cluster running Opteron or Intel processors, where work streams are simply allocated to the right platform for the element of the application that really requires it.
Over the course of time we're going to have to look at this very carefully and see whether architectures in general should become more multifaceted in their capability. It's sort of the counterargument to a company that might choose to compete by building a purpose-built machine.
By definition, if you build a purpose-driven machine, you've kind of said to your customers, "The rest of the problem is yours to figure out." … Our view is that since we value the nature of a client-led relationship, we can't leave customers in the lurch like that. We have to do the best we can to build the composite set of technologies to solve the overall problems that the client faces.
To a certain extent, that's why our focus on this area is not limited to what we're doing on the server side. Earlier this year, we launched a solutions effort called Deep Computing Visualization, because our observation was that visualization was becoming an increasingly central theme.
While this conversation has really been focused on technologies, the other part of the conversation has to do with our business consulting services, our hosting service, and a variety of other things that look not solely at an application in its canonical form. …
You've also got to look at the financial circumstances or even the sociological circumstances that characterize the behavior of the enterprise and you've got to say, "All right, the conventional model of build-a-box, sell-a-box may not be the universal panacea that everybody needs. Maybe there are on-demand kinds of solutions that can be provided or hosting solutions or combinations of those as well."
So it's a dramatically complex environment that I think a lot of people over the past couple of years have dismissed in terms of its gross complexity by virtue of the headlines some of the really big deals have garnered. We just see this as an extraordinary, rich marketplace that gives rise to creativity on many, many dimensions that go beyond just pure technology.