Comcast's Net-Neutral Future

By Larry Seltzer  |  Posted 2008-09-25

Comcast's new plan, if it's executed as advertised, is unobjectionable and better for the company. Under the plan, if a user is consuming more than a certain percentage of either upstream or downstream capacity at a time when overall traffic reaches a certain level, that user's traffic is shifted to a lower priority status.

It has been weeks since Comcast announced that it would be moving its "capacity management" from a protocol-oriented scheme to one based on aggregate use of bandwidth.

The protocol/application approach was found by the FCC to violate rules of network neutrality. Comcast is appealing the ruling just to protect their rights, and I think they have a valid point, but that's probably just of historical interest. Having examined the new plan, I find it hard to see why they would prefer the old one.

Here's the short version of how it works: If, in any one physical section of the network, the overall traffic reaches a certain level, then Comcast looks at individual users. At such a time, if a user is consuming more than a certain percentage of either upstream or downstream capacity, that user's traffic is shifted to a lower priority status. If the user's utilization stays below the threshold for a sufficient period of time, the traffic is restored to normal priority.
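
In rough pseudocode, the scheme amounts to a two-step check per network segment. The following is a minimal sketch of my own, in Python; the names, data structures and parameter values are illustrative, not Comcast's, and the actual thresholds are covered below.

    # Illustrative sketch only; Comcast has not published an implementation.
    # Each user is a dict like:
    #   {"share": 0.75, "priority": "normal", "last_over": 0.0}

    def manage_priorities(segment_utilization, users, congestion_level,
                          user_level, quiet_period, now):
        # Step 1: only when the segment is busy, flag heavy users.
        if segment_utilization >= congestion_level:
            for user in users:
                if user["share"] >= user_level:
                    user["priority"] = "low"
        # Step 2: track when each user was last over the threshold, and
        # restore those who have stayed under it for the quiet period.
        for user in users:
            if user["share"] >= user_level:
                user["last_over"] = now
            elif (user["priority"] == "low"
                  and now - user["last_over"] >= quiet_period):
                user["priority"] = "normal"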

It's generally known that cable modem service is shared; you're on the same network segment as your neighbors, and it's on these segments that bandwidth hogging becomes an issue. How many users are on each of these segments? The coax cable on your street is all shared. At some point, it connects to an "optical node" where it bridges into a fiber network. The fiber feeds through termination hubs to a CMTS (Cable Modem Termination System).

Each CMTS has multiple ports, both upstream and downstream, and these ports seem to be the key points at which capacity can be constrained. On average, about 275 cable modems share the same downstream port and about 100 cable modems share the same upstream port. So it may or may not be the case that you are sharing bandwidth with your immediate physical neighbors.

Comcast has approximately 3,300 CMTSs deployed throughout their network, serving 14.4 million customers, for an average of about 4,364 customers per CMTS. The bandwidth into each of these is considerable, but so is the demand for it. The CMTSs connect further upstream into Comcast's RNRs (Regional Network Routers), and it is near these points that they are implementing the new rules, although the new equipment will work with individual ports.
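
For a rough sense of scale, the stated figures can be combined; note that the per-CMTS port counts below are my own back-of-the-envelope inference, assuming one modem per customer and uniform loading, not numbers Comcast has published.

    # Back-of-the-envelope arithmetic from the figures above.
    customers = 14_400_000
    cmts_count = 3_300
    customers_per_cmts = customers / cmts_count    # about 4,364
    # Assuming one modem per customer and uniform distribution:
    downstream_ports = customers_per_cmts / 275    # about 16 per CMTS
    upstream_ports = customers_per_cmts / 100      # about 44 per CMTS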

Before de-prioritizing anyone's traffic, the system looks at the traffic level on individual ports. First they determine whether the port is in what they call a "Near Congestion State," a state in which performance could degrade for all users on the port. The levels, based on experimentation in their test markets, are 70 percent utilization of provisioned bandwidth on the upstream ports and 80 percent on the downstream ports, measured over a 15-minute period. They say they expect such levels to be reached "for relatively small portions of the day, if at all, though there is no way to forecast what will be the busiest time on a particular port on a particular day."
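
In code terms, the port-level test is simple. Here's a hedged sketch using the published thresholds; the function name and the idea of a precomputed 15-minute average are my assumptions.

    # Hypothetical "Near Congestion State" test for a single port.
    UPSTREAM_LIMIT = 0.70    # 70% of provisioned upstream bandwidth
    DOWNSTREAM_LIMIT = 0.80  # 80% of provisioned downstream bandwidth

    def near_congestion(avg_utilization_15min, direction):
        # avg_utilization_15min: fraction of provisioned bandwidth
        # used over the last 15 minutes, e.g. 0.73.
        limit = (UPSTREAM_LIMIT if direction == "upstream"
                 else DOWNSTREAM_LIMIT)
        return avg_utilization_15min >= limit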

If these levels are reached, they then look at the traffic utilization of individual users on that port. ("Users" in this case really means "cable modems.") The "user consumption threshold" they look for is 70 percent of either upstream or downstream bandwidth, also over a 15-minute period. (All of these levels might change over time as Comcast tunes the system.)
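
The per-user test is the same idea applied to a single cable modem, and it would run only on ports already in Near Congestion State. Again, a sketch with assumed names:

    # Hypothetical per-modem check, run only on congested ports.
    USER_LIMIT = 0.70  # 70% of provisioned bandwidth, either direction

    def over_consumption_threshold(avg_up_15min, avg_down_15min):
        # Each argument is the modem's average utilization (as a
        # fraction of its provisioned rate) over the last 15 minutes.
        return avg_up_15min >= USER_LIMIT or avg_down_15min >= USER_LIMIT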

Larry Seltzer has been writing software for and English about computers ever since, much to his own amazement, he graduated from the University of Pennsylvania in 1983.

He was one of the authors of NPL and NPL-R, fourth-generation languages for microcomputers by the now-defunct DeskTop Software Corporation. (Larry is sad to find absolutely no hits on any of these products on Google.) His work at DeskTop Software included programming the UCSD p-System, a virtual machine-based operating system with portable binaries that pre-dated Java by more than 10 years.

For several years, he wrote corporate software for Mathematica Policy Research (they're still in business!) and Chase Econometrics (not so lucky) before being forcibly thrown into the consulting market. He bummed around the Philadelphia consulting and contract-programming scenes for a year or two before taking a job at NSTL (National Software Testing Labs), developing product tests and managing contract testing for the computer industry, governments and publications.

In 1991 Larry moved to Massachusetts to become Technical Director of PC Week Labs (now eWeek Labs). He moved within Ziff Davis to New York in 1994 to run testing at Windows Sources. In 1995, he became Technical Director for Internet product testing at PC Magazine and stayed there till 1998.

Since then, he has been writing for numerous other publications, including Fortune Small Business, Windows 2000 Magazine (now Windows and .NET Magazine), ZDNet and Sam Whitmore's Media Survey.
