Imagine being a high-level executive trying to close a huge deal after hours from home, using high-definition video conferencing over your so-called high-speed Internet connection, only to find that congestion on the link makes communication impossible.
Or imagine that your formerly reliable broadband connection now slows down so frequently that you are rarely able to download the latest hot YouTube video before giving up in frustration.
Those scenarios could play out in different locations and at different times on the Internet as early as 2010, if the conclusions of a Nemertes Research study prove true.
Nemertes Research late last year stirred the pot in a long-simmering networking-industry debate over whether capacity upgrades to different parts of the collective Internet will keep up with increasing demand.
That debate spilled over into the new year as more players weighed in on the study’s conclusion that, at least in North America, growth in bandwidth demand at the access portion, or so-called last mile, of the Internet will exceed capacity starting in 2010.
The study, “The Internet Singularity, Delayed: Why Limits in Internet Capacity Will Stifle Innovation on the Web,” separately assessed infrastructure investments planned by service providers as well as projected traffic patterns. It found that exponential bandwidth demand growth driven by video, peer-to-peer file transfer and other Web content will exceed planned capacity upgrades at the access portion of the Internet, rather than the core.
While investments in the core, or backbone, of the Internet will keep pace, the study concluded that within two years the investment required to close the gap between increasing bandwidth demand and access capacity will reach $43 billion.
Access is essentially the high-speed cable or DSL broadband connectivity that many residential and some business users rely on to reach the Internet. It does not include the leased lines or network services with guaranteed data rates typically used by medium and large enterprises.
Not all industry players agree with Nemertes’ forecast.
“Because their business is the ’Net, it’s in the service providers’ own interest to get ahead of this trend,” argued Doug Webster, director of marketing for service providers at Cisco Systems in Austin, Texas. “We’re seeing a number of service providers doing that. AT&T is doing a large core build-out. Savvis is doing the same thing. XO Communications is doing that. We’ve seen a trend among service providers – not just the largest, but emerging providers and those in emerging markets – they are getting ahead of it,” he said.
Study Predicts Internet Users Face Bandwidth Drought
Verizon is probably the best example of a large service provider working to get ahead of the demand curve at the network’s edge. In its FiOS (Fiber Optic Service) project, the carrier is spending billions of dollars to upgrade its network with fiber to the home capable of supporting data rates of at least 50 megabits per second (Mbps).
“We believe pushing fiber to the home is the right approach. We did see the writing on the wall as this report suggests,” said Stuart Elby, vice president of network architecture for Verizon in Basking Ridge, N.J.
Elby believes the Nemertes study is a “vindication” of Verizon’s multibillion-dollar FiOS project. “We caught quite a bit of heat for that from Wall Street. Now we think other large carriers will have to follow. But it will be much harder to make this investment four or five years from now,” he said.
Verizon’s approach of building a passive optical network all the way to the home differs from that of competitors such as AT&T, whose U-verse project employs either fiber to the premises or fiber to the nearest network node, with existing twisted-pair copper wire covering the final stretch to the home.
Dave Passmore, research director at the Burton Group, believes that “most of the fiber networks going in now for local access are based on passive optical networking,” and that approach will make it easier to ensure that capacity can meet demand.
“The biggest cost of access is the actual wire in the ground – the cable plant. Once you put fiber in place, speed is essentially unlimited,” said Passmore in Sterling, Va. “You can upgrade the endpoint electronics without having to swap out switches and routers in the middle, so you can continue to create more bandwidth out of fiber almost indefinitely. All that suggests providers will ultimately keep up with demand,” he said.
Verizon, in fact, announced on Jan. 8 that it has upgraded its passive optical network to gigabit-per-second (Gbps) speeds in locations in California, Massachusetts, Maryland, Texas, New Jersey, New York, Rhode Island, Pennsylvania and Virginia.
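The capacity argument Passmore makes rests on how a shared fiber’s line rate divides among subscribers. A rough back-of-the-envelope sketch (the line rate and split ratio below are typical GPON figures, not numbers taken from the article):

```python
# Illustrative arithmetic only: the line rate and split ratio are
# typical GPON values, not figures from Verizon's announcement.
GPON_DOWNSTREAM_GBPS = 2.488   # standard GPON downstream line rate
HOMES_PER_SPLITTER = 32        # a common passive split ratio

# Worst case: every home on the splitter pulls traffic at once.
per_home_mbps = GPON_DOWNSTREAM_GBPS * 1000 / HOMES_PER_SPLITTER
print(f"~{per_home_mbps:.0f} Mbps per home if all 32 draw at once")  # ~78 Mbps
```

Even in that worst case the shared rate exceeds the 50 Mbps FiOS target, and because the splitters are passive, raising the line rate only requires swapping the electronics at the endpoints.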
But that approach assumes service providers will stick with the same all-you-can-eat pricing model they have employed to date. Cisco competitor Juniper Networks agrees that service providers will keep up with increasing bandwidth demands at the Internet’s edge, but they will accomplish that by adding more intelligence to the network, according to Ravi Medikonda, director of wireline service provider marketing at Juniper in Sunnyvale, Calif.
“Service provider revenues are stagnating because broadband penetration [is reaching a saturation point]. So the model of upgrading the network will change,” said Medikonda. “They will depend on the revenue sharing business model, and the network architecture will change from building a big, all-you-can-eat network to a network that has two different attributes – identity management, and policy and control,” he added.
The revenue-sharing model to date has been used by Google and Yahoo, which have paid partners such as AOL, EarthLink and others to serve as channels to the end subscriber. Google, for example, generated gross revenue of $3.6 billion in the first calendar quarter of 2007 but paid out $1.1 billion to thousands of partners such as AOL, Ask Jeeves, EarthLink and HowStuffWorks.
In that model the key partners missing are the service providers such as AT&T, British Telecom and others. Those providers, Medikonda predicted, “will squeeze companies like Google in the revenue-sharing supply chain, because in the next three to five years the Internet will get to a congested mode where companies like Google [will help to fund network upgrades] so the end user won’t have a bad experience.”
Juniper, of course, has created federated identity management techniques that allow the network to identify the user and then use a policy and control system to create policies on demand based on the applications the user is accessing.
“If a video-on-demand provider wants to give the user a better experience, it can signal the policy and control system, which on demand changes the bandwidth for that particular session to create a better user experience,” Medikonda explained.
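The session-level control Medikonda describes can be sketched in miniature. Every name below is invented for illustration; this is not Juniper’s actual API, just a toy model of a policy-and-control system that a content provider can signal:

```python
# Hypothetical sketch of an on-demand policy-and-control system.
# All class and method names are invented; this is not Juniper's API.

class PolicyController:
    def __init__(self, default_mbps=5):
        self.default_mbps = default_mbps
        self.sessions = {}          # session_id -> allocated Mbps

    def start_session(self, session_id):
        """Every identified session begins at the subscriber's default rate."""
        self.sessions[session_id] = self.default_mbps

    def request_boost(self, session_id, mbps):
        """A content provider signals the controller to raise the bandwidth
        for one session, e.g. for a video-on-demand stream."""
        if session_id in self.sessions:
            self.sessions[session_id] = max(self.sessions[session_id], mbps)
            return True
        return False            # unknown session: signal is refused

# A video provider boosts one viewer's session to an HD-capable rate:
ctrl = PolicyController()
ctrl.start_session("user42-vod")
ctrl.request_boost("user42-vod", 20)
print(ctrl.sessions["user42-vod"])   # 20
```

The design point is that the boost is per session and on demand, rather than a permanent fatter pipe, which is what ties it to the revenue-sharing model: the party requesting the boost is the one who can be billed for it.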
Both Juniper and rival Cisco agree that more intelligence is needed in the network to apply quality of service (QoS) controls to latency-sensitive applications, and Medikonda asserted that “identity management is the next wave of investment service providers are looking at.”
But Burton Group’s Passmore believes that such an approach is more costly than just increasing the size of the network pipe.
“It’s cheaper to deploy more bandwidth than to turn on QoS and violate net neutrality. Forty percent of the cost of your phone bill is the billing itself. What that says is that going to a flat rate and throwing bandwidth at the problem is much more effective than trying to meter things.”
And then there’s the ’Net neutrality debate, which, at least in the United States, could in the end prohibit the application of QoS controls that throttle data rates for some applications to favor others.
That debate to date has muddied the waters for many service providers, who are unsure about whether they will be allowed to recoup any investment they make to beef up bandwidth at the Internet’s on-ramp.
“All those public policy issues have gummed up the pure business decision about how to invest in the access area,” claimed Mike Jude, author of the Nemertes study and senior analyst at Nemertes Research in Denver, Colo. “I think the public policy debate is potentially retarding investment in that area.”
How that debate plays out could ultimately decide whether an Internet bandwidth crunch happens. Juniper’s Medikonda believes the great ’Net neutrality debate will fade away, and that in three to five years “revenue sharing will be more practically used and deployed.”
Nemertes is quick to point out that if demand exceeds capacity, it won’t break the Internet. “It’s simply not possible. The Internet’s fundamental architecture precludes it,” the study concluded.
Instead, “What you’ll have is some places with local congestion at certain times of the day,” Passmore said.
If that occurs, then consumer expectations will likely change, affecting demand. “We have an expectation of entitlement to free stuff [from the Internet]. But as a user, I’m only going to download videos until it’s an unfavorable experience,” argued Tom Nolle, president of CIMI Corp. in Voorhees, N.J.
The United States, in fact, is one of the few countries that still employs “all-you-can-eat pricing” rather than usage-based pricing, which leads to unrealistic expectations, Nolle asserted.
“We might all like to download an HD movie to watch in an hour, but that’s not going to happen,” he said.