Disaster Jumpstarts Network Reevaluation

 
 
By Peter Coffee  |  Posted 2005-09-20

News Analysis: Throughput must keep pace with surging, unpredictable data demands.

The ideal network is invisible: bits go in at one access point and come out at another, in zero time with zero error and with zero administrative workload. All those ideals are challenged, though, by the exploding workloads, surging performance requirements and unpredictable usage models arising from acts of man and nature alike.

Anyone who's operating a network today, or building one for tomorrow, needs to investigate these edge conditions—and beyond—to avoid costly disappointment and potentially disastrous surprise.

The rate at which data is being produced and the ease with which data can be stored are vastly outpacing the speed with which data can be transferred from source to repository.

High on the list of potential data eruptions is the proliferation of RFID (radio-frequency identification) tags and other wireless sensors, with attractive applications for manufacturing, retail, health care, public safety and business intelligence tasks—and likely many others yet to be recognized.

Storage devices, meanwhile, defy doomsayers' predictions of imminent collision with physical limits, continually breaking new ground in both absolute performance and cost-effectiveness.

Network throughput, however, has grown at a far more leisurely rate. If the typical client device was a 300-bps dial-up modem in 1980 and is a 600K-bps DSL connection in 2005, then throughput has grown at a compound rate of only 35 percent per year.
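The arithmetic behind that figure is easy to check. Using the article's two data points, the compound annual growth rate works out as follows:

```python
# Compound annual growth rate (CAGR) of client link speed,
# using the figures cited above: 300 bps in 1980, 600K bps in 2005.
start_bps = 300          # dial-up modem, 1980
end_bps = 600_000        # DSL connection, 2005
years = 2005 - 1980

cagr = (end_bps / start_bps) ** (1 / years) - 1
print(f"Network throughput CAGR: {cagr:.1%}")   # roughly 35 percent per year
```

The same formula with 70 percent and 90 percent annual growth shows CPU and storage capability compounding into a gap of several orders of magnitude over the same 25 years.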

Desktop CPU speed, meanwhile, has grown roughly 70 percent per year, and desktop storage has grown 90 percent per year. It's harder to quantify this imbalance in enterprise systems, with their greater diversity of scale and variety of technology, but the overall situation is much the same.

Worse still is the escalation of peak-to-average ratios—or the "burstiness" of network traffic, to use less-formal language—as the mix of network transactions shifts from steady streams of terminal traffic to user- and event-driven spikes of multimedia content.
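The peak-to-average ratio itself is a simple computation; the sample traffic values below are illustrative assumptions, not measured data, but they show how a bursty multimedia workload scores far higher than a steady terminal stream of the same average volume:

```python
# Peak-to-average ratio of a traffic trace: the higher the ratio,
# the "burstier" the workload. Sample values (in Mbps) are made up
# for illustration.
def peak_to_average(samples):
    return max(samples) / (sum(samples) / len(samples))

terminal = [1.0, 1.1, 0.9, 1.0, 1.0, 1.1]       # steady terminal traffic
multimedia = [0.2, 0.1, 9.5, 0.2, 8.8, 0.1]     # event-driven spikes

print(f"terminal:   {peak_to_average(terminal):.1f}")
print(f"multimedia: {peak_to_average(multimedia):.1f}")
```

Provisioning for the peak rather than the average is what makes bursty traffic so much more expensive to carry.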

Network throughput can fall short of demand in contexts as frivolous as the 1999 Victoria's Secret online fashion show or as critical as the sudden need for geotechnical data and earthen dam failure simulation results in response to Hurricane Katrina.

Indeed, the volumes of data required by a large-scale disaster response—for example, after a major California earthquake—would be so great that they'd be moved faster by overnight delivery of physical media than they can currently be transferred over a network, according to Anke Kamrath, user services and development division director at the San Diego Supercomputer Center.
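A back-of-envelope comparison shows why the courier wins. The data-set size and link speed below are assumed for illustration, not figures from the San Diego Supercomputer Center:

```python
# Sketch: time to move a large disaster-response data set over a
# dedicated network link, vs. roughly 24 hours by overnight courier.
# Both the 50TB data set and the 1-Gbps sustained rate are assumptions.
dataset_tb = 50                       # assumed data-set size, terabytes
link_gbps = 1                         # assumed sustained network throughput

dataset_bits = dataset_tb * 1e12 * 8
transfer_hours = dataset_bits / (link_gbps * 1e9) / 3600
print(f"network transfer: {transfer_hours:.0f} hours")  # well over 24 hours
```

Even at a sustained gigabit per second—optimistic for a 2005 wide-area path—the network takes days to do what a box of disks does overnight.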

At the end of the road, moreover, is the fundamental light-speed limit—actually, a fraction of the ultimate speed of light when signals move through anything denser than a vacuum—that is rapidly becoming a sizable component of overall latency in distributed processing and remote Web services invocations.

Kamrath told eWEEK Labs this month that algorithm developers who focus today on memory bandwidth limits can hope that new technology will alleviate their problems, but "the speed of light, from San Diego to Illinois or the East Coast, is not going to change—it's going to become the wart," she warned.
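That latency floor is easy to estimate. Light in optical fiber travels at roughly two-thirds of its vacuum speed; the route length below is an assumed figure for a San Diego-to-East-Coast fiber path, since real routes are longer than great-circle distance:

```python
# Irreducible propagation delay for a cross-country fiber path.
# The 4,500 km route length is an assumption for illustration.
C_VACUUM_KM_S = 299_792               # speed of light in vacuum, km/s
fiber_km_s = C_VACUUM_KM_S * 2 / 3    # ~2/3 of c in glass fiber

path_km = 4_500                       # assumed San Diego-to-East-Coast route
one_way_ms = path_km / fiber_km_s * 1000
print(f"one-way: {one_way_ms:.1f} ms, round trip: {2 * one_way_ms:.1f} ms")
```

Tens of milliseconds per round trip cannot be engineered away, which is why chatty protocols and remote service invocations pay a fixed tax on every exchange.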

What all this means is that networks will have to emerge from their idealized invisibility to become a much more prominent item on developers' and users' agendas. Developers will need to know more about network protocols and the implications of tuning transfer parameters to match the needs of a particular application. Developers will have to be more aware of the actual locations where data resides and where services are executed, as well as the paths that data and service invocations follow.

System managers will have to consider all these things as part of their planning to maintain quality of service despite both random fluctuations in workload and disastrous incidents of downtime. In addition, business-process owners will have to consider network factors, balancing the costs and uncertainties of long-distance transfers against the efficiencies of service-oriented architecture as they make their future plans.

Technology Editor Peter Coffee can be reached at peter_coffee@ziffdavis.com.

Check out eWEEK.com for the latest news, views and analysis on servers, switches and networking protocols for the enterprise and small businesses.
 
 
 
 
Peter Coffee is Director of Platform Research at salesforce.com, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA, and Chapman College.
 
 
 
 
 
 
 
