Skepticism about whether a “spectrum commons” could work most likely springs from the way we've been trained to think about “spectrum.” A hundred years of careless talk has led many to think spectrum is a thing. Worse, a hundred years of careless talk has led most people to think that when radios suffer “interference” it is because the radio waves have, in some sense, collided. Both notions are simply wrong. There is no such thing as “spectrum” that gets “used” the way a pasture gets used. Spectrum is not a thing. And what we think of as “interference” is not an issue of radio waves; it's an issue in the receiver. Clarifying these two misconceptions will go a long way toward a greater understanding of a spectrum commons.
Think about a conversation in the middle of a party. Lots of people are talking around you; there could be plenty of other noise in the room. The TV could be blaring or a police siren could be wailing outside. But despite all this “noise” you are still quite likely able to carry on a conversation. And that's because (1) humans are intelligent receivers and (2) sound waves (like radio waves) don't collide and fall to the ground; they essentially pass through each other without any “damage.” Even if two people utter exactly the same sounds at exactly the same moment, people can hear the two speakers and distinguish their messages.
Older radio technology would not produce such efficiency in an analogous situation. If two transmitters tried to transmit on the same channel at the same time, then a receiver would not report two different transmissions. The receiver would instead report interference, but not because of any flaw in the radio transmissions. The flaw is in the receivers.
For most of our history, radio receivers were “stupid.” They distinguished one transmission from another by picking out the stronger of the two. If two transmissions were on the same wavelength, then the receiver wouldn't know which transmission to focus on. That confusion is what we hear as “interference.”
Progress in radio technology has made radios smarter; new technologies for transmitting radio signals effectively allow two transmitters to use the same channels at the same time. There are too many of these technologies for me to describe comprehensively. But consider spread spectrum technology as just one example.
The idea behind spread spectrum technology is very similar to the idea behind the Internet. “Data” is broken into chunks that are then transmitted on many different frequencies at essentially the same time. Each chunk is marked with a code that the receiver is able to detect. The receiver listens for all of the transmissions with that code and then collects them to “receive” the message.
Fast computers make it possible to scramble data like this and receive it intact. Assuming the processing capacity, spreading the transmission this way allows many different users to use the same swath of spectrum to transmit content at the same time. And because the receiver knows what it is listening for, the transmission need not be so powerful as to drown out every other transmission. Instead, like two people on opposite sides of a football stadium who have agreed on a common code of signals for communicating with a flashlight, small bursts of data are enough to get large amounts of content across any distance.
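The chunk-and-code idea can be illustrated with a toy sketch. This is not real spread spectrum (which spreads signals across frequencies using mathematical coding), just an analogy in code: two hypothetical senders tag their chunks with a code, their transmissions interleave on one shared channel, and each receiver keeps only the chunks bearing the code it listens for. All names here are illustrative.

```python
# Toy analogy for spread spectrum: chunks from two senders share one
# channel; each receiver reassembles only the chunks tagged with its code.

def transmit(code, message, chunk_size=4):
    """Break a message into chunks, each tagged with the sender's code."""
    return [(code, message[i:i + chunk_size])
            for i in range(0, len(message), chunk_size)]

def receive(code, channel):
    """Collect only the chunks tagged with the code we listen for."""
    return "".join(chunk for c, chunk in channel if c == code)

# Two transmitters share one channel; their chunks interleave freely.
channel = []
a = transmit("A", "meet me at noon")
b = transmit("B", "bring the plans")
while a or b:                      # interleave the two transmissions
    if a: channel.append(a.pop(0))
    if b: channel.append(b.pop(0))

print(receive("A", channel))       # meet me at noon
print(receive("B", channel))       # bring the plans
```

Even though the chunks arrive mixed together, neither receiver reports “interference,” because each knows what it is listening for — the point of the stadium-flashlight analogy.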
The Value of Cooperation
As computers get faster, this gain in capacity (which technologists call “processing gain”) grows. Like a better antenna (“antenna gain”), processing gain means we can have more capacity in a particular radio architecture than otherwise would be possible. Technologists are increasingly discussing a related kind of gain called “cooperation gain.” Again, think about a party. If I need to tell you that it's time to leave, I could choose to shout that message across the room. Shouting, however, is rude. So instead, imagine I choose to whisper my message to the person standing next to me, and he whispers it to the next person, and she to the next person, and so on. This series of whispers could get my message across the room without forcing me to shout.
Radios can achieve a similar gain from cooperation. Rather than blasting a message at high power so that you can hear it at the other end of the city, I could instead whisper the message to a receiver near me, and it could whisper the message to the next receiver, and so on. Through their cooperation, these nodes operating in a “mesh” could reduce the power required by any particular transmission. And if the power of any particular transmission is reduced, then the total capacity again would increase.
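The arithmetic behind cooperation gain can be sketched with a back-of-the-envelope model. Assume, for illustration only, that the power needed to reach a receiver grows with the square of the distance (a simplified free-space model; real radio propagation is messier). Then relaying a message in several short hops takes far less total power than one long-distance blast:

```python
# Back-of-the-envelope "cooperation gain" sketch. Assumption: required
# power grows with distance squared (simplified free-space model).

def direct_power(distance):
    """Power to shout the message across the whole distance at once."""
    return distance ** 2

def mesh_power(distance, hops):
    """Total power when the message is relayed in equal-length hops."""
    hop_length = distance / hops
    return hops * hop_length ** 2   # = distance**2 / hops

d = 10.0
print(direct_power(d))    # 100.0 : one loud transmission across town
print(mesh_power(d, 5))   # 20.0  : five whispers, one fifth the power
```

Under this toy model, splitting one transmission into k hops cuts total power by a factor of k, which is why lowering per-transmission power across a mesh can raise the capacity of the system as a whole.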
Radio technology thus tries to find the mix of antenna gain, processing gain and cooperation gain that maximizes total capacity for the system. And as Wi-Fi and meshed wireless networks are increasingly demonstrating, that increase in capacity can be realized without any centralized controller deciding who gets to say what when, or without allocating exclusive rights to “spectrum.” Instead, with the proper protocols and an etiquette between different protocols, radios can simply “share” spectrum without central coordination.
But won't such “sharing” lead to congestion? Won't this “commons” lead to a “tragedy of the commons”? The answer, surprisingly, is “not necessarily.” No doubt a bad architecture will quickly collapse under congestion. But many believe that there are good architectures for spectrum sharing that would have the property of increasing spectrum capacity as the number of users increases. It's too early to know whether such systems will scale, but it's not too early to see that their ability to exist depends upon lots of spectrum remaining free for experimentation.
Wi-Fi is the first successful example of these spectrum-sharing technologies. Within thin slices of the spectrum bands, the government has permitted “unlicensed” spectrum use. The 802.11 family of protocols has jumped on these slivers to deliver surprisingly robust data services. These protocols rely on a hobbled version of spread-spectrum technology. Even in this crude implementation, the technology is spreading like wildfire.
And this is just the beginning. If the Federal Communications Commission frees more spectrum for such experimentation, there is no end to wireless technology's potential. Especially at a time when broadband competition has all but stalled, using a spectrum commons to invite new competitors is a strategy that looks increasingly appealing to policy makers.
For more information, see the papers collected at cyberlaw.stanford.edu/spectrum. For a commercial implementation of “meshed” technologies, see www.meshednetworks.com.
Lawrence Lessig is a regular columnist for CIO Insight Magazine, and a professor of law at Stanford University Law School. He is the author of The Future of Ideas: The Fate of the Commons in a Connected World and Code and Other Laws of Cyberspace.