News Analysis: Throughput must keep pace with surging, unpredictable data demands.
The ideal network is invisible: bits go in at one access point and come out at another, in zero time with zero error and with zero administrative workload. All those ideals are challenged, though, by the exploding workloads, surging performance requirements and unpredictable usage models arising from acts of man and nature alike.
Anyone who's operating a network today, or building one for tomorrow, needs to investigate these edge conditions (and beyond) to avoid costly disappointment and potentially disastrous surprise.
The rate at which data is being produced and the ease with which data can be stored are vastly outpacing the speed with which data can be transferred from source to repository.
High on the list of potential data eruptions is the proliferation of RFID (radio-frequency identification) tags and other wireless sensors, with attractive applications for manufacturing, retail, health care, public safety and business intelligence tasks, and likely many others yet to be recognized.
Storage devices, meanwhile, defy doomsayers' predictions of imminent collision with physical limits, continually breaking new ground in both absolute performance and cost-effectiveness.
Network throughput, however, has grown at a far more leisurely rate. If the typical client device was a 300-bps dial-up modem in 1980 and is a 600K-bps DSL connection in 2005, then throughput has grown at a compound rate of only 35 percent per year.
Desktop CPU speed, meanwhile, has grown roughly 70 percent per year, and desktop storage has grown 90 percent per year. It's harder to quantify this imbalance in enterprise systems, with their greater diversity of scale and variety of technology, but the overall situation is much the same.
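The growth rates above follow directly from the endpoints cited: a compound annual growth rate over 25 years can be checked with a few lines of arithmetic, sketched here using the article's own figures.

```python
# Compound annual growth rate (CAGR) check, using the article's figures:
# a 300-bps dial-up modem in 1980 vs. a 600K-bps DSL line in 2005.
def cagr(start, end, years):
    """Return the compound annual growth rate as a fraction."""
    return (end / start) ** (1.0 / years) - 1.0

throughput = cagr(300, 600_000, 25)   # roughly 0.355, i.e. about 35 percent/year
print(f"client throughput growth: {throughput:.1%} per year")
```

Run the same function with a 2,000,000-fold storage increase or a few-thousand-fold CPU increase over the same span to reproduce the 90 percent and 70 percent figures.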
Worse still is the escalation of peak-to-average ratios (the "burstiness" of network traffic, to use less formal language) as the mix of network transactions shifts from steady streams of terminal traffic to user- and event-driven spikes of multimedia content.
Network throughput can fall short of demand in contexts as frivolous as the 1999 Victoria's Secret online fashion show or as critical as the sudden need for geotechnical data and earthen dam failure simulation results in response to Hurricane Katrina.
Indeed, the volumes of data required by a large-scale disaster response (for example, after a major California earthquake) would be so great that they'd be moved faster by overnight delivery of physical media than they can currently be transferred over a network, according to Anke Kamrath, user services and development division director at the San Diego Supercomputer Center.
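The arithmetic behind Kamrath's point is easy to reproduce. The dataset size and link speed below are illustrative assumptions, not figures from the article, but they show how quickly a wide-area link loses to a courier.

```python
# Rough comparison of network transfer vs. overnight shipping of physical
# media. The 10TB dataset and OC-3 link are assumed for illustration.
def transfer_hours(size_bytes, link_bps):
    """Hours to move size_bytes over a link of link_bps (ideal, no protocol overhead)."""
    return size_bytes * 8 / link_bps / 3600

dataset = 10 * 10**12          # 10TB of simulation output (assumption)
oc3     = 155 * 10**6          # OC-3 line rate, 155M bps
hours   = transfer_hours(dataset, oc3)
print(f"network transfer: {hours:.0f} hours")   # vs. roughly 24 for overnight delivery
```

Even at full line rate with zero overhead, the transfer takes days; a box of disk drives arrives the next morning.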
At the end of the road, moreover, is the fundamental light-speed limit (actually, a fraction of the ultimate speed of light when signals move through anything denser than a vacuum) that is rapidly becoming a sizable component of overall latency in distributed processing and remote Web services invocations.
Kamrath told eWEEK Labs this month that algorithm developers who focus today on memory bandwidth limits can hope that new technology will alleviate their problems, but "the speed of light, from San Diego to Illinois or the East Coast, is not going to change; it's going to become the wart," she warned.
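That physics-imposed floor can be estimated directly: signals in optical fiber travel at roughly two-thirds the vacuum speed of light. The distance used below is an assumed great-circle figure for San Diego to the East Coast, for illustration only.

```python
# Lower bound on one-way latency imposed by signal propagation in fiber.
C_VACUUM = 299_792_458           # speed of light in vacuum, m/s
C_FIBER  = C_VACUUM * 2 / 3      # typical propagation speed in optical fiber

def min_latency_ms(distance_km):
    """Physics-imposed one-way latency floor over fiber, in milliseconds."""
    return distance_km * 1000 / C_FIBER * 1000

# ~3,900 km San Diego to the East Coast (assumed great-circle distance)
one_way = min_latency_ms(3900)
print(f"one-way floor: {one_way:.1f} ms, round trip: {2 * one_way:.1f} ms")
```

A round trip near 40 milliseconds is irreducible by any future hardware, which is why chatty request/response patterns across the continent become "the wart" in distributed designs.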
What all this means is that networks will have to emerge from their idealized invisibility to become a much more prominent item on developers' and users' agendas. Developers will need to know more about network protocols and the implications of tuning transfer parameters to match the needs of a particular application. They will also have to be more aware of the actual locations where data resides and where services are executed, as well as the paths that data and service invocations follow.
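One concrete example of such tuning is sizing a TCP socket buffer to the link's bandwidth-delay product, so a single flow can keep a long, fat pipe full. The link speed and round-trip time below are illustrative assumptions.

```python
# Sketch of bandwidth-delay-product buffer sizing for a bulk transfer.
# The OC-12 rate and 60 ms coast-to-coast RTT are assumed for illustration.
import socket

def bdp_bytes(link_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return int(link_bps * rtt_s / 8)

bdp = bdp_bytes(622 * 10**6, 0.060)   # ~4.7MB for an OC-12 at 60 ms RTT

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp)  # request a buffer sized to the BDP
sock.close()
```

With the default buffers of the era (often 64KB or less), the same flow could never exceed roughly 8M bps on that path, no matter how fast the underlying link.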
System managers will have to consider all these things as part of their planning to maintain quality of service despite both random fluctuations in workload and disastrous incidents of downtime. In addition, business-process owners will have to consider network factors, balancing the costs and uncertainties of long-distance transfers against the efficiencies of service-oriented architecture as they make their future plans.
Technology Editor Peter Coffee can be reached at firstname.lastname@example.org.
Check out eWEEK.com for the latest news, views and analysis on servers, switches and networking protocols for the enterprise and small businesses.