Where do we get our ideas of what's smart and what's stupid to do? That question largely inspired this month's Youth Summit for Online Safety, conducted at the University of California, San Diego, with sponsorship from Microsoft and nonprofits i-Safe America (www.isafe.org) and TakingITGlobal (www.takingitglobal.org).
I was asked the question above during a call with summit participant Yoshi Kohno, a doctoral candidate in computer security at UCSD. Kohno's research examines methods of cracking security systems, and he co-authored the July 2003 report "Analysis of an Electronic Voting System" (with Avi Rubin and others) that examined bugs in Diebold's electronic voting software.
Kohno told me that the perverse thing about digital content networks such as the Internet is that their security tools are almost as obscure as their security flaws. "People know how to lock a door; they see a seat belt in a car, they know what it's for," he said. "But members of my family sometimes have no idea of how to respond to a firewall message."
I asked Kohno if today's apparently tech-savvy teens are really the group most at risk, or if the AARP demographic isn't at least as vulnerable: a fast-growing population of users who often don't recognize Net hazards. "One could think about generalizing this," Kohno said, but he referred me to research by i-Safe indicating that younger users are more at risk because they think of the Net as a social environment, while older users tend to think of it as a workshop or a transaction environment.
On further reading of that data, I must agree: When you're working with a tool or making a purchase, it does seem likely that you'll have your guard up against accidents or malice in a way that you don't when you're just visiting with friends. For all their supposed technology comfort, therefore, the younger generation may be at greater risk than their elders.
"People need to develop an appropriate sense of paranoia," Kohno said. "A dark alley gives you visual cues that this is a dangerous place." The alternative, Id add, is to think we can light up all the alleys or banish all muggers—but we havent succeeded in doing either in the physical world, and there are many reasons why it would be even harder to make the online world intrinsically safe.
Cyberspace jurisdictions are more difficult to define than those in real space; new types of behavior emerge more suddenly in a medium of bits than in a world of physical objects and events. That doesn't stop some people from thinking they can police cyberspace as if it were a technical, rather than social, environment. In the process, they're turning it into a dense, tangled forest of regulation.
For example, look at the mess of miscellaneous mandates we've made in the United States from the formerly sharply focused Communications Act. That 1934 law created the Federal Communications Commission for a purpose that actually made sense: the regulation of "interstate and foreign commerce in communication by wire and radio" to produce an efficient, reasonably priced, worldwide service for the purposes of national defense and to assure the safety of life and property. That's a set of purposes that any constitutional scholar could argue to be acceptable uses of federal power.
Since its initial passage, though, the Communications Act has been amended to do everything from mandating honesty in quiz shows (1960) to funding development of children's television programs (1990). This approach tries to make the world of media intrinsically safe, instead of taking the more realistic approach of the UCSD forum, which seeks to help people learn to recognize dangers that can never be eliminated.
Continuing amendment of the Communications Act is neither a viable strategy nor a defensible use of government power. The scarce resource of the radio spectrum and the natural monopolies of circuit-switched networks called for centralized regulation; the nearly unlimited capacity and decentralized nature of digital networks dictate loosening rather than further complicating those controls. The challenge is to replace ineffective, increasingly illegitimate control with more effective social understandings—for every generation.
Technology Editor Peter Coffee can be reached at firstname.lastname@example.org.
Check out eWEEK.com for the latest news, views and analysis of technology's impact on government and politics.