As parallel computing moves from special cases (such as image processing) into the mainstream of both desktop and enterprise IT, application developers have to start thinking less like pinball players and more like soccer coaches.
Instead of following the bouncing ball of control flow through a single-threaded virtual machine, developers need to keep track of the status and availability of multiple resources. We need to learn how to give directions that anticipate several possible results; we may even want to perform speculative computations, knowing their results will often be discarded, rather than merely reacting to what actually happens and passing along that single outcome to the corresponding box on a flow chart.
It helps to have language-level support for ideas of concurrency and speculation. Programming languages such as Ada, Lisp, and Java might seem to come from different worlds (large-scale embedded systems, computer science research, and network-based devices, respectively), but all of them provide such structures in one form or another. Looking at the ideas that have been developed in these languages, and at the resulting effects on application design, can point us in interesting directions that we can pursue with other enterprise development tools.
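Java's standard library makes the speculative style fairly direct. The following is a minimal sketch, not a recipe: slowButSure and fastButRisky are invented stand-ins for two alternative algorithms, raced on a thread pool, with invokeAny keeping whichever result arrives first and cancelling the loser so its work is simply thrown away.

    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class Speculate {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(2);

            // Two speculative attempts at the same answer; both strategies
            // are stand-ins for real alternative algorithms.
            List<Callable<Long>> attempts = List.of(
                () -> slowButSure(1_000_000),
                () -> fastButRisky(1_000_000));

            // invokeAny returns the first successfully completed result
            // and cancels the rest; their partial work is discarded.
            long answer = pool.invokeAny(attempts);
            System.out.println("answer = " + answer);
            pool.shutdownNow();
        }

        // Sums 0..n-1 by brute force: slow but obviously correct.
        static long slowButSure(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) sum += i;
            return sum;
        }

        // Uses the closed form n(n-1)/2: fast, standing in here for a
        // shortcut that might not always apply.
        static long fastButRisky(int n) {
            return (long) n * (n - 1) / 2;
        }
    }

The same pattern scales up: launch alternative query plans, routes, or heuristic searches in parallel, then commit to the first acceptable answer.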
Especially important is the difference between “eager” and “lazy” task creation. Eager task creation spins off a task whenever it seems useful; lazy task creation creates a task only when there is a resource looking for something to do. The eager approach is easier to implement but far more extravagant with resources, and far more dependent on a fast, highly reliable network.
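The distinction is easy to see in miniature. In the sketch below (again Java, with a trivial work function standing in for real computation, and the figures chosen only for illustration), the eager half materializes a task object for every job up front, while the lazy half keeps the jobs as inert data and lets each idle worker pull the next one only when it has nothing else to do.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.LongAdder;

    public class TaskCreation {
        static final int JOBS = 10_000;

        public static void main(String[] args) throws Exception {
            LongAdder total = new LongAdder();

            // Eager: spin off a task for every piece of work, whether or
            // not anyone is free to run it. Simple, but 10,000 jobs mean
            // 10,000 task objects competing for four threads.
            ExecutorService eager = Executors.newFixedThreadPool(4);
            for (int i = 0; i < JOBS; i++) {
                final int job = i;
                eager.submit(() -> total.add(work(job)));
            }
            eager.shutdown();
            eager.awaitTermination(1, TimeUnit.MINUTES);

            // Lazy: the work sits in a queue as plain data; a task comes
            // into being only when an idle worker looks for something to do.
            BlockingQueue<Integer> todo = new LinkedBlockingQueue<>();
            for (int i = 0; i < JOBS; i++) todo.add(i);
            Thread[] workers = new Thread[4];
            for (int w = 0; w < workers.length; w++) {
                workers[w] = new Thread(() -> {
                    Integer job;
                    while ((job = todo.poll()) != null) total.add(work(job));
                });
                workers[w].start();
            }
            for (Thread t : workers) t.join();

            System.out.println("total = " + total.sum());
        }

        static long work(int job) { return job; } // stand-in for real computation
    }

The lazy, worker-pull half of that sketch is the pattern that the next idea generalizes.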
In a grid computing model, we can imagine an eventual marketplace environment where idle processors troll the network, looking for opportunities to sell their available time, bidding for the chance to offload a piece of a job from the “prime contractor” processor at an attractive price.
The drawback, though, is the high degree of coherence needed to support a lazy task creation scheme on a widely dispersed network with separate, even if cooperating, owners: every node must share the same understanding of what’s already known and what tasks aren’t yet finished. Message passing, rather than a huge shared memory, may be the better model for systems of this kind on real-world networks.
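At small scale, message passing looks like the sketch below, with in-process threads standing in for network nodes; the Request and Reply types and the poison-pill shutdown convention are illustrative choices, not a real protocol. Each side knows only what it has been sent through its queue, so there is no global shared state to keep coherent.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class Channels {
        // Immutable messages: each party learns state only from what it
        // is sent, never by peeking into someone else's memory.
        record Request(int jobId, long payload) {}
        record Reply(int jobId, long result) {}

        public static void main(String[] args) throws Exception {
            BlockingQueue<Request> inbox = new ArrayBlockingQueue<>(16);
            BlockingQueue<Reply> outbox = new ArrayBlockingQueue<>(16);

            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        Request r = inbox.take();
                        if (r.jobId() < 0) return; // poison pill: stop
                        outbox.put(new Reply(r.jobId(), r.payload() * 2));
                    }
                } catch (InterruptedException e) {
                    // interrupted: treat as shutdown
                }
            });
            worker.start();

            for (int i = 0; i < 3; i++) inbox.put(new Request(i, i * 10L));
            for (int i = 0; i < 3; i++) System.out.println(outbox.take());
            inbox.put(new Request(-1, 0)); // ask the worker to stop
            worker.join();
        }
    }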
But as the price of a capable computational node plummets into the low three figures, finding ways to string cheap beads of capability into flexible strands and clusters is an opportunity too good to pass up.