To make software more secure, the industry must embrace code scanning tools and incorporate them into the daily development cycle.
But are the tools currently sturdy enough to stand up to immense code loads?
Scanning tools are in fact getting better.
They're scaling better, for one thing.
They're able to run on parallel machines, which means they can handle much bigger code loads and get results to developers in a reasonable amount of time; in other words, before the code in question has been revised two or three times as testing drags on.
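To make the parallelization idea concrete, here is a minimal sketch. The `scan_file` checker is a toy stand-in for a real static analysis engine, and the thread pool stands in for the parallel machines a vendor tool would spread work across; none of this reflects any particular product's implementation.

```python
# Hypothetical sketch: fan scanning work out across workers and merge
# the findings. scan_file is a toy stand-in for a real analysis engine.
import concurrent.futures
import re

def scan_file(item):
    """Toy 'scanner': flag calls to strcpy, a classic unsafe C API."""
    path, source = item
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if re.search(r"\bstrcpy\s*\(", line):
            findings.append((path, lineno, "unsafe strcpy call"))
    return findings

def scan_tree(files, max_workers=4):
    """Scan (path, source) pairs concurrently and merge the findings."""
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        for findings in pool.map(scan_file, files):
            results.extend(findings)
    return results

files = [("a.c", "strcpy(dst, src);\n"), ("b.c", "strncpy(dst, src, n);\n")]
print(scan_tree(files))  # only a.c's strcpy line is flagged
```

Because each file is scanned independently, the same fan-out pattern scales from threads on one box to separate machines, which is what lets results come back in a day rather than after the code has moved on.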
One remaining problem, though, is there's still nobody telling us which of the code tools are any good or what a given tool is specifically good at.
At this point, the work is hard, and there's just no independent body out there that can point you to the tool that will best fit your needs.
"It's kind of arduous," Oracle's Mark Fallon, senior release manager of software development, said in a chat we had before a presentation he gave on the subject at the RSA Conference on Feb. 15.
"At the moment there are no publicly available benchmarks. There isn't a good body of knowledge [from which] to say 'It's these guys over these guys,' 'It's these guys for this particular area,' so you have to go through and do the evaluation yourself. That's fine if you have 100 lines of code. We have 50 million lines of code."
That's a huge body of code with extremely complex paths wending through it. Oracle and companies with comparatively unwieldy code sets at this point have to bring the code in, get the code scanning tool working, make sure it can scan the massive body of code, and then evaluate its results to make sure that they're real and not false positives.
"With any scanner company we've worked with, we've gone through iterations where their tool couldn't handle our code, and we've worked with them" to fine-tune the tool's ability to churn through the code set, Fallon told me.
That's why, for example, Oracle, based in Redwood Shores, Calif., worked with Fortify for a year before signing on the bottom line to use its tool. During a year of tweaking, Fortify came in to Oracle repeatedly as the developers put their heads together to optimize results.
The Fortify deal was part of Oracle's ongoing effort to knit volume code testing into its development DNA. In December, Oracle announced that it would use static code analysis technology from Fortify to hunt for bugs in C, C++, PL/SQL and Java as part of a program to improve checking for security holes during development, instead of trying to patch holes after the product's out the door.
The Fortify tool had to stand up to a brutal load: Oracle's database alone contains between 40 million and 50 million lines of code. The tool had to scale to spit out results in a reasonable amount of time and be able to work on parallel machines.
"We want to get an answer in a day, not find out that two or three people have modified the product" while it's dragged through testing, Fallon said at the time.
How to Use a Code Scanner Effectively
But deploying a code scanner is only one step in the process. As Fallon pointed out, such tools don't constitute a silver bullet; you can't ensure a secure code set merely by scanning at the testing stage.
As it is, when reviewing the results of a code scanner, you still need experts on the code base in question to look at the results and make sure they're real and valuable.
And as Fallon said, that means you're looking at taking at least a couple of high-end developers away from other work to help you read test results and to help you make sure you can parallelize the scans and run them in real time.
Another delicate question that enterprises must confront when considering the use of a volume code scanning tool is how to persuade developers to go fix the errors that the tool finds.
Fallon recommends a three-step approach to that issue.
First, he said, you've got to eliminate the noise. You've got to kill as many false positives as you can before you hand developers a set of results that are predominantly bogus and that don't reflect a keen understanding of flaw vs. feature.
"If you give a set of results to a developer and there's one real issue and nine bad issues, the chances are they'll hit the nine bad issues first," he said. "What's the level of confidence that the 10th issue will be something they'll want to look at?"
Enterprises have got to make sure the things they're asking developers to look at are real issues, so that those developers gain trust in the scanning tool, and so as not to waste the developers' time. Make sure developers are fixing things that need fixing, Fallon said.
To do that, you've got to work with a small subset of developers who have domain knowledge of the code you're looking at, and work with them on what tuning has to be done to the tools.
With static code scanners, you have to teach the tools which aspects of a code set are features as opposed to flaws, he said, adding that this will dramatically cut down on false-positive rates.
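Mechanically, that kind of tuning often boils down to a reviewed suppression list: experts triage findings once, mark the by-design ones as safe, and developers only ever see what survives. The sketch below assumes invented finding fields and rule names; it illustrates the filtering idea, not any vendor's mechanism.

```python
# Hypothetical sketch of scanner tuning: domain experts record
# reviewed-as-safe findings (intended features, by-design behavior)
# in a suppression set, so only triaged results reach developers.
# The finding fields and rule names are invented for illustration.
def triage(findings, suppressions):
    """Drop findings that experts have already marked as safe."""
    return [f for f in findings
            if (f["file"], f["rule"]) not in suppressions]

suppressions = {("net/parse.c", "TAINTED_INPUT")}  # reviewed: by design
findings = [
    {"file": "net/parse.c", "rule": "TAINTED_INPUT", "line": 88},
    {"file": "db/exec.c", "rule": "BUFFER_OVERRUN", "line": 12},
]
print(triage(findings, suppressions))  # only the db/exec.c finding remains
```

Keeping the suppression set under review by the developers with domain knowledge is what builds the trust Fallon describes: the one real issue no longer hides behind nine bogus ones.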
Fallon suggested that you need to eliminate flaw backlogs as well. Don't let known vulnerabilities keep generating positives; get them out of the way so developers can concentrate on keeping up with new developments in hacking technique.
Finally, make scanning part of the daily development process. "It's much more palatable for developers if they know we're going to spend time doing cleanup, that a new vulnerability has come in and we'll spend time eradicating it from their code base," Fallon said.
Giving developers instant feedback makes it easier for them to do their jobs, as well. Knowing you don't have to hold a release while you clean up code is a relief for everybody.
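One common way to combine daily scanning with backlog elimination (a general pattern, not necessarily Oracle's) is to diff each day's findings against a stored baseline, so developers are interrupted only by new issues while the known backlog is burned down separately. All identifiers below are invented for illustration.

```python
# Hypothetical daily-scan gate: surface only findings absent from the
# baseline. Keying on (file, rule, function) rather than line number
# keeps the baseline stable across unrelated edits. Names are invented.
def new_findings(today, baseline):
    """Return today's findings that aren't already in the baseline."""
    seen = {(f["file"], f["rule"], f["func"]) for f in baseline}
    return [f for f in today
            if (f["file"], f["rule"], f["func"]) not in seen]

baseline = [{"file": "sql/opt.c", "rule": "NULL_DEREF", "func": "plan"}]
today = baseline + [{"file": "sql/exec.c", "rule": "SQLI", "func": "run"}]
print(new_findings(today, baseline))  # only the new sql/exec.c finding
```

A gate like this is what makes daily scanning palatable: a fresh vulnerability shows up the day it's introduced, instead of surfacing as part of an undifferentiated pile at release time.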
As far as establishing an independent body to evaluate the code test tools, NIST (the National Institute of Standards and Technology) has been working with the Department of Homeland Security in an attempt to document best practices. One area they've been discussing with companies like Oracle is how to best manage software releases.
The feedback will hopefully help raise the bar on secure coding. The state of the art in security and vulnerabilities is moving ahead constantly: external hackers try new approaches all the time, so as internal development groups do research in the area, it's vital that they feed findings back to ISVs and create best practices that can then get picked up by NIST.
But the state of the art moves on, on both sides of the security fence. We have to make sure we keep ahead of it, and getting disciplined about volume code testing is one step in that direction.
Lisa Vaas is Ziff Davis Internet's news editor in charge of operations. She is also the editor of eWEEK.com's Database and Business Intelligence topic center. She has been with eWEEK and eWEEK.com since 1995, most recently covering enterprise applications and database technology. She can be reached at firstname.lastname@example.org.