During last month’s JavaOne Conference in San Francisco, Fortify Software convened a panel to discuss the role of application developers in software security and the need for appropriate development technology, without which genuine security is impossible to achieve.
Invited expert panelists were Gary McGraw, chief technology officer of Cigital, of Dulles, Va., and a widely read author on this subject; Bill Pugh, professor of computer science at the University of Maryland in College Park, Md.; David Wagner, professor of computer science at the University of California at Berkeley; and Bill Joy, co-founder of Sun Microsystems, of Santa Clara, Calif., and a partner in Kleiner Perkins Caufield & Byers, of Menlo Park, Calif.
The opening statements of these experts are shared here, and more of their subsequent discussion and their Q&A interaction with the invitation-only audience are linked from the eWEEK blogs.
That link can be readily found in the June 12 entry titled “Notes from Fortify’s security panel at JavaOne” in the Archives section at blog.eweek.com/petercoffee.
Gary McGraw
Java is good because it’s type-safe. A lot of people who use Java may not even be aware of that, but the fact that they’re using it is very important and good.
The problems that we see in software security—from a technical perspective—often are related to the programming language C, which is kind of a disaster from the security perspective. Java did a lot to clean up the mess and make things a little bit more comprehensible.
But software security is about two kinds of problems: bugs and flaws. It’s important to think about both. When you’re working with Java, you’ll have fewer problems with bugs because of type safety, and you’ll have more cycles to spend thinking about architecture and about building in security from an architectural perspective.
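To make the bug side of that distinction concrete, here is a minimal sketch (our illustration, not shown at the panel): the out-of-bounds write that fuels classic C buffer-overflow exploits is stopped by the Java runtime’s bounds checking.

    // Illustrative sketch: Java's type safety and bounds checks turn what
    // would be silent memory corruption in C into a contained runtime error.
    public class BoundsCheckDemo {
        public static void main(String[] args) {
            int[] buffer = new int[8];
            try {
                buffer[8] = 42; // one element past the end, the classic overflow slip
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Write rejected: " + e.getMessage());
            }
        }
    }

The failure is detected and contained rather than exploited, which is why, as McGraw notes, Java developers can spend more of their attention on architectural flaws.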
Bill Pugh
A lot of people think that errors and defects and stupid mistakes are things that the “lesser programmers” make. One of the things that I’ve found is that tools find insanely embarrassing bugs in production code written by some of the very best programmers I know.
People start thinking, “Because we have smart employees, we have a good development process; we’re not going to have stupid bugs.” But no. Everybody, every process, every person makes stupid mistakes. It just happens. The question is, What do you do to find and eliminate your stupid mistakes after they occur? Because they’re going to occur.
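As a small, hedged illustration of the kind of mistake Pugh means (the example is ours, not his), static analysis tools routinely flag reference comparisons where value comparisons were intended:

    // Illustrative only: comparing strings with == checks object identity,
    // not contents, so the test below can fail for equal strings.
    public class StringCompareBug {
        static boolean isAdmin(String role) {
            return role == "admin";             // the stupid mistake
            // intended: return "admin".equals(role);
        }
        public static void main(String[] args) {
            System.out.println(isAdmin(new String("admin"))); // prints false
        }
    }

Bugs of this shape sail through code review because the code looks right; an automated checker, which never gets tired or embarrassed, catches them reliably.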
David Wagner
As a security person, I think we’re losing this battle right now. We’re falling behind, and we need to step up our game. We’re getting better at security, but hackers are getting better faster than we are. About 80 percent of home computer users are infected with spyware. A new Windows XP machine has a mean time to infection of about 15 minutes. We’re falling behind.
I would be careful of the “It’s not my problem” syndrome. Developers think that “Oh, I’ve got firewalls, so I’m safe,” or that security is about good operating systems, so it’s the operating systems folks’ problem or the networking folks’ problem.
Developers need to recognize it’s [their] problem.
Good application software makes a difference. In 2004, Internet Explorer had a publicly revealed vulnerability that had not been patched on 98 percent of the days [of that year]. Firefox was vulnerable on 7 percent of the days [of that year]. That tells you that what the application developers are doing can make a big difference.
Bill Joy
When I was at Berkeley in the ’80s and late ’70s, Eric Schmidt—who’s now CEO of Google—and I were graduate students together, and Eric was a summer student at Xerox. He showed me Cedar, a type-safe derivative of Pascal, so 25 years ago we knew it was possible to write a programming language that caught dumb and obvious mistakes.
In the ’90s, when James Gosling showed me Oak [the predecessor of Java], I realized that here was an opportunity to build a language where programs have meaning. When you write a Java program, there’s a spec; there’s a formal semantics. If the program can run—if it’s not a concurrent program—it will always give the same answer.
With the coming together of the need for security with the Net and of programming languages that are testable, you can come up with layers of abstraction. Java’s just one layer. If you write your whole program without any higher-level description than just a Java program, eventually it will be too hard to understand.
You need patterns of software, other layers, other notations, other ways to test higher-level properties of the software.
That’s the only way. These kinds of transitions take a really long time.
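One small, hypothetical example of what Joy’s “programs have meaning” buys in practice (our sketch, not his): behavior that C leaves undefined, such as signed integer overflow, is pinned down by the Java specification, so every conforming virtual machine must give the same answer.

    // Illustrative only: signed overflow is defined in Java as two's-complement
    // wraparound, so this prints -2147483648 on every conforming JVM.
    // The equivalent expression on a signed int in C is undefined behavior.
    public class DefinedOverflow {
        public static void main(String[] args) {
            int x = Integer.MAX_VALUE;
            System.out.println(x + 1);
        }
    }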