When a new security vulnerability is found in a piece of commercial software, the discovery inevitably touches off the seemingly endless search for a culprit. Who is responsible for the defect?
Typically, it's either a developer or a tester. Developers are the ones who wrote the faulty code; testers are the ones who should have anticipated and watched for certain common vulnerabilities, such as buffer overruns and format string flaws. One of the two groups is usually guilty—or so the popular reasoning goes.
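For readers unfamiliar with the second class of flaw named above, here is a minimal C sketch of a format string vulnerability. The code is illustrative, not drawn from any product discussed in the article, and the function names are invented for the example:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: the classic format string flaw testers are
 * expected to catch. */

/* Unsafe: attacker-controlled text becomes the format string itself,
 * so specifiers such as %x (leak stack data) or %n (write to memory)
 * are interpreted by printf. */
void log_unsafe(const char *msg) {
    printf(msg);                  /* BUG: msg is treated as a format */
}

/* Safe: the input is passed as data under a fixed "%s" format, so any
 * specifiers inside msg stay inert. */
void log_safe(char *out, size_t out_len, const char *msg) {
    snprintf(out, out_len, "%s", msg);
}
```

The fix is a one-character-class discipline—never pass untrusted input as a format argument—which is exactly the kind of habit the training courses criticized below fail to instill.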
But security experts and software engineers are beginning to challenge that thinking, calling it outdated and myopic. The reality, they say, is that while developers and testers deserve a share of the blame for vulnerabilities, they are just small pieces of a larger software design and development process. And that process is inherently flawed and destined to turn out flawed products, according to the experts.
“We're looking at a terrible situation that's getting worse. By and large, the software community hasn't learned the fundamental principles of quality management. We're in a test-and-fix process,” said Watts Humphrey, a fellow at the Software Engineering Institute at Carnegie Mellon University, in Pittsburgh. “The problem is that defective software works. But it's not secure. There's no such thing as defective, secure software.”
The cause of vendors turning out defective, insecure software is twofold, according to Humphrey. First, senior management and project managers continue to write design goals and specifications that emphasize functionality and ease of use over security and reliability. Even now, very few enterprise CIOs or IT managers say that security is at or even near the top of their list of criteria when they buy new software. So vendors continue to do what is familiar and what has been successful in the past: produce highly usable, insecure applications.
Second, most developers and software engineers have little or no training in writing secure code and are relying on coding practices they learned in college. For some developers, those practices and routines could be decades old. Although most, if not all, software vendors send their developers to regular training classes, these courses rarely teach new techniques and instead end up re-emphasizing bad habits.
In an effort to change this culture, Humphrey has developed two software development methodologies: TSP (Team Software Process) and PSP (Personal Software Process). The two methodologies are designed to help developers and software engineers turn out quality products on budget and on schedule.
“Every plan they produce misses management's schedule and is therefore unacceptable. As a result, they must work without the guidance of an orderly plan,” Humphrey said. “Under these conditions, the team will generally take much longer to complete the project than they otherwise would. The TSP team's responsibility is to plan and produce a quality product as rapidly and effectively as they can. Conversely, it is management's responsibility to start projects in time to finish when needed. When similar projects have taken 18 months and management demands a nine-month schedule, this is clearly unrealistic. Where was management nine months ago when the project should have started?”
Humphrey estimates that even experienced programmers inject one defect in every 10 lines of code they write—an unacceptable level. “One defect in a thousand lines of code is risky software,” Humphrey said.
The processes have been adopted at quite a few large corporations, including Microsoft Corp., which is using them to develop some of its internal software.
The Redmond, Wash., company has been using the TSP and PSP methodologies to update an application it uses to deliver software to its OEMs. The application has more than 24,000 lines of code, which contained about 350 defects the last time around, according to Carol Grojean, a senior program manager at Microsoft. Using the SEI methodologies, Grojean said she expects fewer than 25 defects in the new version. She attributes the expected improvement in quality to having consistent expectations and goals throughout the development process.
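A quick back-of-the-envelope check shows how the Microsoft figures line up with Humphrey's threshold. Defect density is conventionally quoted per 1,000 lines of code (KLOC); the helper function below is our own, using only the numbers quoted in the article:

```c
/* Defect density: defects per 1,000 lines of code (KLOC).
 * The figures plugged in below come from the article's text. */
double defects_per_kloc(double defects, double lines_of_code) {
    return defects / (lines_of_code / 1000.0);
}
```

Plugging in the reported numbers, roughly 350 defects in 24,000 lines works out to about 14.6 defects per KLOC; fewer than 25 defects in the same code base would be about 1 per KLOC, right at the one-defect-per-thousand-lines level Humphrey calls risky.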
“Most of the security defects are injected during specifications and design,” Grojean said.
However, despite the success of the TSP/PSP process internally, Microsoft has no plans at this point to use it on its commercial applications.
Security experts agree that developers with better training and a more disciplined approach to programming are key to producing more secure applications in the long run.
“We continue to see the same types of vulnerabilities show up in systems year after year,” said Rich Pethia, director of the CERT Coordination Center at Carnegie Mellon. “Very often, software comes out of the box in its least-secure state. We need wider adoption of risk analysis, more technical specialists, increased awareness of security risks and higher-quality software.”