LAS VEGAS -- IBM Corp. and other vendors and research organizations are working on a system that will make computing more efficient and more accessible for users.
The concept, known as grid computing, is hardly new. It's better known as distributed computing and has been around in one form or another for decades. But if the work that IBM and its allies are doing comes to fruition, it could dramatically reduce the complexity of network computing, Irving Wladawsky-Berger, vice president of technology and strategy in IBM's server group, said in his keynote address at the NetWorld+Interop show here Wednesday.
Grid computing involves linking numerous remote machines together and harnessing their individual processing power and storage capabilities for the good of the whole. Several small, special-purpose grids are already in use, including one used by a group of physicists to share research data and ideas.
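The idea described above can be sketched in a few lines: a large job is split into chunks, each chunk is farmed out to a separate machine, and the partial results are combined. This is a minimal, hypothetical illustration only; a thread pool stands in for the remote grid nodes, and the prime-counting workload is invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor


def count_primes(lo, hi):
    """Count primes in [lo, hi) -- the unit of work shipped to one node."""
    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True
    return sum(1 for n in range(lo, hi) if is_prime(n))


def run_on_grid(lo, hi, nodes=4):
    """Split the range into per-node chunks, run them in parallel,
    and aggregate the partial results -- the essence of a compute grid."""
    step = (hi - lo + nodes - 1) // nodes
    chunks = [(s, min(s + step, hi)) for s in range(lo, hi, step)]
    # The pool below is a local stand-in for dispatching to remote machines.
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        return sum(pool.map(lambda c: count_primes(*c), chunks))
```

In a real grid, the dispatch step would cross the network and the scheduler would account for each machine's load and availability, but the split-dispatch-aggregate shape is the same.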
"If something is too complex for high-energy physicists and requires them to invent a grid, it says something about the state of IT today," Wladawsky-Berger said.
He added that the productivity gains and efficiencies promised by the advent of the Internet have yet to be realized, but could materialize quickly if and when grid computing becomes widespread.
"We fell in love with the technology and we saw over the next few years it really turned into this tremendous hype," he said. "The real productivity happens when you start integrating all the processes and start to have end-to-end automation."
The efforts of grid computing's proponents have already produced a plan called the Open Grid Services Architecture, which Wladawsky-Berger says will be the key to grid computing gaining widespread acceptance in the enterprise world. The architecture utilizes open standards and protocols such as SOAP, XML and WSDL and ultimately will enable participants to build a network capable of self-management.
Such a system would be able to automatically route traffic around bottlenecks or machines that have crashed, detect and counter malicious attacks and perform other tasks that today require human intervention.
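The failover behavior described above can be sketched as a dispatcher that tries each replica of a service in turn and silently routes around any machine that has gone down. This is an invented illustration, not code from the Open Grid Services Architecture; the node records and function names are hypothetical.

```python
class NodeDown(Exception):
    """Raised when a machine in the grid fails to respond."""


def call_node(node, request):
    """Stand-in for a remote service call; fails if the node has crashed."""
    if node["up"]:
        return f"{request} handled by {node['name']}"
    raise NodeDown(node["name"])


def dispatch(nodes, request):
    """Route around crashed machines: try each replica until one answers.

    No human intervention is needed -- the failure is detected and the
    traffic is rerouted automatically, which is the self-management the
    architecture aims for.
    """
    for node in nodes:
        try:
            return call_node(node, request)
        except NodeDown:
            continue  # this replica is down; fail over to the next one
    raise RuntimeError("all replicas down")
```

A production system would layer in health checks, load balancing and attack detection on top of this basic try-the-next-replica loop.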
"Where we're going over time is to make it as easy as possible for businesses to decide how to deploy services," Wladawsky-Berger said.