It’s time for a change or two. Or six.
Fundamental problems with the way organizations develop software have gone, if not ignored, then largely unaddressed for far too long. Instead of refusing to employ flawed software, buyers accept bugs, vulnerabilities, corrupt files, system crashes and unpredictable behavior as a cost of doing business. Weak programming practices mean not just defective code but, in the worst cases, revenue- and profit-sapping downtime for corporations, and injuries or even fatalities for people.
None of this is to say that high-quality software doesn’t exist. Safety-minded, serious developers have built systems that allow remote-control vehicles to roam the dusty soil of Mars, let telescopes peer through the vastness of space to glimpse the universe’s distant past, and permit jet fighters to stealthily pierce the sky.
Yet everyday life now can’t run without reliable software: in appliances, tools and toys; in pacemakers, infusion pumps and radiation-therapy machines; in factories, power plants and office campuses; in trains, planes and automobiles.
Precisely because of software’s ubiquity, especially in the machines entrusted with people’s lives, “good” is no longer good enough. Only rock-solid software that users can operate without fail and that machines can follow predictably is permissible now.
“There’s a huge amount to be done,” says James Gosling, the Sun Microsystems vice president who was instrumental in the development of the Java programming language.
Where to begin?
Baseline gathered the opinions of more than 20 software and safety experts—including Gosling; Bill Joy, former chief scientist at Sun; Herb Krasner, director of the Software Quality Institute (SQI) at the University of Texas; William Guttman, director of the Sustainable Computing Consortium (SCC); Mike Konrad, a senior member of the Software Engineering Institute (SEI); Pradeep Khosla, who heads the Electrical and Computer Engineering department at Carnegie Mellon; Gary McGraw, chief technology officer at Cigital; and Adam Kolawa, chief executive of Parasoft.
Here’s their collective prescription for fixing what ails software development.
Too many people building programs lack skills. “Lots of people call themselves software engineers who are not,” says the SQI’s Krasner.
This often means the original design specifications for a software product are inadequate. In the end, these “engineers” can’t assess the risk that the software may not work as intended.
To be a doctor, one must get a college degree, pass medical exams, complete an internship and then take a series of tests to practice in a particular specialty. Accountants, engineers and lawyers also must go through rigorous testing and certification processes.
“That doesn’t happen in software,” Cigital’s McGraw says. “You can declare yourself a software architect and off you go.”
Organizations such as the Institute for Certification of Computing Professionals (ICCP), the Institute of Electrical and Electronics Engineers (IEEE) and the SEI give exams that cover everything from systems development to data management to the various tools and techniques being used by developers.
But, in the end, it’s the companies paying for software that hold the power to demand certification. Today, too few even consider whether the software they buy comes from certified developers.
Software creation is increasingly a collaborative process. That has led to systematic approaches of reviewing the quality of team-created applications.
The Capability Maturity Model, developed by the Software Engineering Institute at the request of the Defense Department, establishes whether a given organization has mastered good software development practices. These include the reliable setting of specifications; proper means of evaluating code that has been created; the ability to set and track internal performance metrics; and consistently finding better ways to develop software. Organizations work their way up the model’s five levels of maturity.
Parasoft’s Kolawa says a software professional also ought to be certified in a particular industry, be it finance, pharmaceuticals or aerospace. If software quality is going to take any leap forward, Kolawa says, “this type of certification of specialty will have to happen.”
As exemplified by the unexpected fatalities that resulted from the way radiation machines were used in Panama, developers never can anticipate fully how their applications will be used. Yet too often developers don’t spend the time and energy needed to find out what users really want and need.
McGraw calls this the “sneaky, dirty little secret of software development.” Even in conventional business applications, the developer philosophy often is: “If they don’t tell us exactly what they want, we’ll just give them something,” he says.
The starting point for fixing this software-development flaw is simple: a precise list of what a new program is supposed to do that can be agreed upon by the people developing the software and the people who will use it. Then teams must double- and triple-check the code they create to make sure users can’t ask it to do tasks that aren’t anticipated or cause unexpected conflicts in calculations.
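One way to enforce such an agreed-upon specification in code is to reject, loudly, any input the design never anticipated. The sketch below is purely illustrative, with hypothetical names and limits, but it shows the principle: fail visibly rather than compute a plausible-looking wrong answer.

```java
// Hypothetical illustration: a spec-driven check that refuses inputs
// outside the agreed range instead of silently doing "something."
public class DoseValidator {
    // Limits drawn from the (hypothetical) written specification.
    static final double MIN_DOSE_GY = 0.0;
    static final double MAX_DOSE_GY = 2.5;

    /** Returns the dose if it is within spec; otherwise fails loudly. */
    public static double checkDose(double doseGy) {
        if (Double.isNaN(doseGy) || doseGy < MIN_DOSE_GY
                || doseGy > MAX_DOSE_GY) {
            // Refuse to guess what the user meant.
            throw new IllegalArgumentException(
                "dose out of specified range: " + doseGy);
        }
        return doseGy;
    }

    public static void main(String[] args) {
        System.out.println(checkDose(1.8));  // within spec
        try {
            checkDose(25.0);                 // out of spec: rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The point is not the particular limits, which are invented here, but that the check exists at all: the program’s behavior outside its specification is decided in advance, not left to chance.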
Building software requires engineering as serious as the kind that goes into high-rise offices. Just as there are building codes for skyscrapers, serious developers now follow coding standards of their own.
In its Code Conventions for the Java programming language, Java’s progenitor, Sun Microsystems, clearly delineates the number of code statements—known as “declarations”—that should be written per screen line (one) and how long each line should be (not more than 80 characters). These conventions recognize such basic facts of code-writing life as how computer terminals “wrap” lines of characters that appear on their screens. The conventions also outline how to clearly name files and how to insert helpful comments into lines of code.
Code conventions are important because they make the code easier to read—and maintain—by people who haven’t worked on it.
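The flavor of those conventions shows in even a small class written to follow them. The example below is hypothetical, but it keeps to the spirit of Sun’s rules: one declaration per line, lines under 80 characters, descriptive names, and comments that explain intent.

```java
/**
 * A small illustrative class written in the spirit of Sun's Code
 * Conventions for Java: one declaration per line, short lines,
 * and comments that say why, not just what.
 */
public class AverageCalculator {
    private int sampleCount;  // one declaration per line,
    private double total;     // each with its own comment

    /** Records one reading. */
    public void add(double reading) {
        total += reading;
        sampleCount++;
    }

    /** Returns the mean of all readings, or 0.0 if none recorded. */
    public double average() {
        if (sampleCount == 0) {
            return 0.0;  // avoid division by zero
        }
        return total / sampleCount;
    }

    public static void main(String[] args) {
        AverageCalculator calc = new AverageCalculator();
        calc.add(2.0);
        calc.add(4.0);
        System.out.println(calc.average());
    }
}
```

A maintenance programmer who has never seen this file can audit it in seconds, which is exactly the argument for conventions.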
Jack Ganssle, an engineer whose Ganssle Group advises companies and developers on how to create high-quality software, acknowledges that “a lot of software engineers think that this [discipline] is totally worthless—a way to depress their creativity.”
But, he notes, “they’re wrong. If the source code is not readable, [if] it’s not absolutely clear, how do you think you can possibly audit it, understand what it’s doing and look for errors?”
Sure, a team can test its code and still not find all the problems. But too often, that observation is used as a reason to avoid further testing, not just in development but after the code’s put into use.
Any given program can be tested for reliability, security and performance when it’s completed. But software can be tested even when it is just a “component” of a system.
Testing tools are widely available from such firms as Empirix, Mercury Interactive, Parasoft and Software Development Technologies (SDT). But, says Gosling, “people don’t use them.”
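Even without a commercial tool, a component can be exercised in isolation with a few lines of test harness. The sketch below, with hypothetical names, tests a single parsing routine against both valid and invalid input, the kind of check that can run long before the full system exists.

```java
// A minimal, framework-free unit test: the component under test
// (parseAge) is exercised in isolation, valid and invalid cases alike.
public class AgeParserTest {
    /** Component under test: parses an age field, rejecting junk. */
    static int parseAge(String field) {
        int age = Integer.parseInt(field.trim());
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("implausible age: " + age);
        }
        return age;
    }

    public static void main(String[] args) {
        check(parseAge(" 42 ") == 42, "trims whitespace");

        boolean rejected = false;
        try {
            parseAge("999");
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        check(rejected, "rejects implausible values");

        System.out.println("all tests passed");
    }

    static void check(boolean ok, String what) {
        if (!ok) {
            throw new AssertionError("FAILED: " + what);
        }
    }
}
```

Tests this small cost almost nothing to run on every build, which is precisely why waiting until the end of a project to start testing is a false economy.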
Testing ties up personnel, and adds to a project’s overall cost.
Since many organizations wait until the end of development to test, projects that are just about to come in “on time” and “within budget” often fail to do either.
Krasner, Guttman, Gosling and others agree that one solution can be a software version of the independent, not-for-profit Underwriters Laboratory, which reviews electronic equipment. Such an independent service would provide a seal of approval that a given piece of software or a software-based system is safe. Vendors who find safety to be a fundamental feature of their product—those whose software runs equipment that affects human lives, for instance—would voluntarily submit their products to the lab. If the software checked out as safe and reliable, it would be stamped as suitable for life-critical applications.
Independent testing isn’t exactly new. For more than two decades, the National Software Testing Lab in Blue Bell, Pa., for instance, has been creating and managing tests for everything from servers to wireless devices to software applications. Its clients include Dell, Intel, Nokia and the Canadian government. Keylabs of Lindon, Utah, which says it has done work for American Airlines, Charles Schwab and Visa, provides similar services.
But there’s no generally accepted seal of approval for software.
Perhaps the biggest reason mediocre software persists—and threatens lives—is that individuals and corporations keep buying it.
“People put up with it,” says Jonathan Jacky, a scientist working at Microsoft Research.
Software might be the only product designed by a group of people called engineers that’s released and known to be imperfect. No one expects a building to fall, a bridge to collapse, a train to derail or a plane to crash. When any of those fail, shock is followed by accusations, inquiries, penalties, and, sometimes, legislative efforts to make sure the problem doesn’t recur.
Not so with software. According to the Cutter Consortium, an information-technology consultancy, almost 40% of 150 software-development organizations it polled last year said they didn’t believe their organizations had an adequate program in place to ensure that their software was high quality.
Cutter senior consultant Elli Bennatan notes that 29% said their companies didn’t have a quality-assurance professional on staff with any real authority, 27% said their companies didn’t conduct formal quality reviews, and 24% didn’t bother to collect software-quality metrics.
And 32% said their companies released software with too many defects. “If you don’t demand quality, you don’t get it,” SQI’s Krasner says. In effect, users and developers of software must begin demanding quality, and backing those organizations that certify developers, such as SEI, or those that support development of reliable code, such as the SCC.
Otherwise, it will be victims’ lawyers, like those in Panama, and legislators or regulators who demand it—in civil court and in statehouses.
Or, in the worst case, in the penal code.
To find out more about the Sustainable Computing Consortium, including how to join the organization, contact Larry Maccherone, Associate Director, CyLab, (412) 268-1715; LMaccherone@cmu.edu.
To find out more about the Software Engineering Institute and the Capability Maturity Model, go to www.sei.cmu.edu/cmmi/ or e-mail email@example.com.
For information on ICCP certification, go to www.iccp.org.
For information on IEEE certification, go to www.computer.org/certification/.