Each day, it seems, we are asked to depend ever more on the Internet for our professional and personal well-being. Yet each week seems to introduce a new computer worm capable of boring into and through our networks, clogging pipes, corrupting data and, in the worst cases, destroying months or even years of hard work. But the very companies that tirelessly tell us that the Internet is fundamental to our future have done almost nothing to protect us from the very defects in their products that give hackers free rein. It's high time Microsoft, Sun Microsystems and other developers undertake an all-out commitment to eliminating buffer overflows.
In fact, it borders on criminal that they have not done so already.
There could be no Code Red, or dozens of other worms that have plagued the Internet in recent years, if it were not for buffer overflows: programming bugs that have been around since the dawn of computing, and that have long been recognized as a vulnerability that hackers can exploit to spread security nightmares through networks. And yet operating system developers in particular, the very companies to which we entrust the fundamental safety of our systems and data, have refused to invest the programming resources and time required to rid their code of these Achilles' heels.
There's a simple reason for this: It's hard, expensive work sifting through tens of millions of lines of code searching for buffer overflows. It's far cheaper to let hackers find and exploit unprotected buffers, then release a quick patch. The problem with that kind of reactive solution is that we all pay a heavy price in corrupted data, clogged bandwidth and sheer frustration by the time the problem is repaired. Even worse, the decentralized Internet provides no means of communicating newly discovered dangers to each user of a vulnerable program, so many users never discover they need a patch until it's too late.
Buffer overflows, or overruns, are easily exploited holes in otherwise secure programs. A buffer is a chunk of a computer's memory or disk space, of limited size, in which data is stored temporarily. If a user or other source of input tries to shove more data into the buffer than it can hold, the data "overflows" into adjacent parts of the memory or disk. This would be a mere nuisance, except that the excess data can erase and replace programming code adjacent to the buffer, enabling a hacker to insert malicious code into the target software.
Programmers have known for decades how to prevent this kind of bug by checking the buffer size and then limiting or filtering input. In the grand scheme of things, it takes only a few lines of code to add buffer checking. Yet programmers often neglect this crucial safety feature, and the resulting vulnerabilities are frequently not caught in multiple stages of debugging.
Developers say new code routinely includes checks on all buffers. Hackers, they claim, are typically exploiting unchecked buffers in legacy portions of the code. Even if true, this excuse is ridiculous on its face. Microsoft and Sun, for example, think nothing of investing hundreds of millions of dollars to develop new bells and whistles for their products, yet they have failed to eliminate a simple bug that's been around for decades. If hackers can find and exploit unchecked buffers to make our lives miserable, clearly these giants must find and fix these buffers to protect us.
It is time for a proactive commitment on the part of the largest developers to eliminate this preventable plague, if for no other reason than that it would be a very wise investment. What's at stake is our basic faith in the Internet as a trusted platform for commerce, finance, personal information and entertainment. Only the largest developers can ensure the safe cyberneighborhoods that will attract us all to their products.
Rob joined Interactive Week from The New York Times, where he was the paper's technology news editor. Rob also was the founding editor of CyberTimes, The New York Times' technology news site on the Web. Under his guidance, the section grew from a one-man operation to an award-winning, full-time venture.
His earlier New York Times assignments were as national weekend editor, national backfield editor and national desk copy editor. Before joining The New York Times in 1992, Rob held key editorial positions at the Dallas Times Herald and The Madison (Wisc.) Capital Times.
A highly regarded technology journalist, he recently was appointed to the University of Wisconsin School of Journalism's board of visitors. Rob lectures yearly on new media at Columbia University's School of Journalism, and has made presentations at the Massachusetts Institute of Technology's Media Lab and Princeton University's New Technologies Symposium.
In addition to overseeing all of Interactive Week's print and online coverage of interactive business and technology, his responsibilities include development of new sections and design elements to ensure that Interactive Week's coverage and presentation are at the forefront of a fast-paced and fast-changing industry.