Oracle Corp. is sick of it.
Microsoft Corp. has been strutting with its newfound security street cred. Take its developers—they're able to quote chapter and verse of the company's SDL (Security Development Lifecycle) blueprint for software creation.
But what about Oracle? Why don't we hear about secure coding from the database king?
The company has been facing growing criticism about poor-quality patches and known vulnerabilities left unpatched for too long. Here's a typical complaint from Dan Downing, vice president of testing services at Mentora, a provider of business application testing, hosting and management services: “Part of the reason there are so many [Oracle] patches is directly reflective of the poor quality of the code,” he said.
“If an application is mature—and every piece of software goes through this cycle at some point—there are no bugs, or few bugs that surface,” he said.
This comes after a history of patches that haven't installed correctly, patches to patch patches, and then patches to patch the patches that were released to patch patches.
Oracle has had a no-comment, protect-our-customers policy on security issues. But its loyal customers are fed up with hearing Microsoft lauded while Oracle's own secure coding practices remain more or less a black box.
Oracle is sick of it. So now it's talking.
John Heimann is the director of security program management at Oracle. He reports to Chief Security Officer Mary Ann Davidson and does the front-end work of security: setting standards, training, enforcing security checklists, determining secure configurations, working on secure-by-default initiatives and coordinating with marketing on security products.
In a daylong tour of Oracle security given to eWEEK on Jan. 11, Heimann pointed out that the type of secure coding Microsoft is blabbing about nowadays had to be in place from the get-go with Oracle, which counts among its longtime customers numerous government agencies, plus commercial companies such as General Electric, Alcoa, Computer Associates and the like.
“From day one we were in a multiuser environment,” Heimann said. “We had to worry about authenticating users, controlling what users could see, from a very early stage in our product. Starting with Oracle 6, I think, we had our first real commercial database release. We had multiuser authorization, authentication, access and control.”
How it's maintained that security, for better or worse, is of course multifaceted.
Most recently, Oracle is talking up secure-by-default initiatives, for one thing.
The company is also solidifying its volume code testing. In December, Oracle announced it would use static code analysis technology from Fortify Software Inc. to hunt for bugs in C, C++, PL/SQL and Java as part of a program to improve checking for security holes during development, instead of trying to patch holes after the product's out the door.
The Fortify tool had to stand up to brutal load. Oracle's database alone contains between 40 million and 50 million lines of code. The tool had to scale to spit out results in a reasonable amount of time and be able to work on parallel machines.
“We want to get an answer in a day, not find out that two or three people have modified the product” while it's dragged through testing, said Mark Fallon, senior manager of software development.
Fortify will be used across all product stacks and was being centrally installed this week.
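Oracle hasn't disclosed how Fortify's analysis works internally, but one class of flaw static analyzers of this kind flag is SQL built by string concatenation, a classic source of SQL injection. The toy scanner below is purely illustrative (the regex and sample code are invented, not Fortify's); a real commercial tool does data-flow analysis across millions of lines rather than line-by-line pattern matching:

```python
import re

# Toy illustration of one check a static analyzer performs: flag source
# lines that splice variables into SQL with string concatenation.
# A real tool (Fortify et al.) tracks tainted data across whole programs.
UNSAFE_SQL = re.compile(
    r"""(SELECT|INSERT|UPDATE|DELETE)\b.*["']\s*(\+|\|\||%)\s*\w""",
    re.IGNORECASE,
)

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like concatenated SQL."""
    return [
        (n, line.strip())
        for n, line in enumerate(source.splitlines(), start=1)
        if UNSAFE_SQL.search(line)
    ]

sample = '''
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
safe = cursor.execute("SELECT * FROM users WHERE name = :1", [user_input])
'''

for lineno, line in scan(sample):
    print(f"line {lineno}: possible SQL injection: {line}")
```

The concatenated query on the sample's second line is flagged; the parameterized bind-variable call is not, which is exactly the distinction injection checkers are built around.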
Oracle is also evaluating an automatic black-box test, which checks at boundaries to see if SQL injections can get through. It's identified a possible vendor and is looking at rolling the tool out across the company, but Heimann declined to state the vendor's name or timing specifics.
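Heimann didn't name the vendor or describe the tool, but the general shape of such a black-box boundary check can be sketched: feed classic injection payloads to an application's input path and flag any that pass through unrejected. Everything below (the payload list, the handler) is a made-up stand-in for illustration; a real scanner probes live HTTP or SQL*Net endpoints:

```python
# Generic sketch of a black-box SQL-injection boundary check.
# Classic payloads that should never survive input validation:
INJECTION_PAYLOADS = [
    "' OR '1'='1",
    "'; DROP TABLE users; --",
    "1 UNION SELECT username, password FROM all_users",
]

def naive_handler(value: str) -> str:
    """Stand-in for an application input path with no sanitization."""
    return value  # passes attacker input straight through

def probe(handler) -> list[str]:
    """Return the payloads the handler lets through unchanged."""
    return [p for p in INJECTION_PAYLOADS if handler(p) == p]

for payload in probe(naive_handler):
    print("payload not rejected:", payload)
```

Against the do-nothing handler above, every payload leaks through; a handler that rejected or escaped quote characters would empty the list, which is the pass condition such a tool checks for.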
Of course, this isn't the first time Oracle has tested code in a big way.
Oracle first started security evaluations in 1990 to pass the Department of Defense's TCSEC (Trusted Computer Security Evaluation Criteria, also known as the Orange Book) in the United States and Europe's ITSEC (Information Technology Security Evaluation Criteria). 1990: That's before the Web, before Web applications blossomed to leak SQL injections and other poisons into back-end databases.
According to Duncan Harris, senior director of security assurance, when Oracle 7 was first evaluated under the governmental security schemes, Europe found one hole. Up until Dec. 1, 1999, there was only one other reported security vulnerability, and it was handled in a similar way to the first hole: by creating tapes and those newfangled things, CDs, to ship a patch to affected customers.
In February 2001, Oracle was tracking nine bugs. By September 2001 it crept up to 17. By December 2002 it leapt to 62.
“That's primarily because external researchers really started turning their attention to Oracle, and that was in the early days of my ethical hacking group, and they had started to make a small impact as well,” Harris said.
Then came August 2004, the time of the ill-fated Alert 68, the first security alert that contained more than one fix for more than one vulnerability. Its problems were legion—for a sampling, go to Pete Finnigan's Weblog and do a search on “Alert 68.”
Oracle has already been working with Fortify for over a year and a half. Also, some two years ago, Oracle's customers started taking the company to task on code quality. Oracle responded by signing a volume purchase agreement with Mercury to bring in a volume testing tool, launching an initiative to test better before releasing software.
In spite of it all, according to Downing, a “high level of skepticism” persists regarding quality when new patches or Family Packs—a group of previously released patches—are released.
“There's an increasing recognition that at your peril do you put these patches and Family Packs into production without some real testing,” he said.
Bear in mind, part of Downing's business is testing. But Oracle itself admits—has had to admit—problems with code quality. It was the infamous Alert 68 that ushered in an era of profound process change, according to Fallon.
“The processes changed dramatically since we did Alert 68,” Fallon said. “Now we're making sure [development] follows exactly the same thing we do for everything,” he said. Namely, Fallon's team crosses all development groups and holds the chokepoints that determine whether a product gets out the door.
Would the new chokepoint holder have choked Alert 68 in its cradle?
Hard to say. Fallon said he just doesn't know what details people had at the time and whether the information would have aborted the bad patch set.
Other changes spawned by the flawed Alert 68 include getting customer communications out as quickly as possible, Heimann said. Alert 68 also resulted in Oracle supplying risk matrixes so customers could get an idea of whether they should patch or not.
Aaron Newman, database security expert, chief technology officer and co-founder of Application Security Inc., said that when Alert 68 first came out, he had a number of customers call “specifically begging for information” on whether they needed to apply the patches, and what exactly the issues around the vulnerabilities were.
“They haven't been able to get that information from Oracle,” Newman said at the time.
We can see Oracle's move to faster communication in the aftermath of the malicious Voyager non-worm code (Oracle's touchy about the use of the word “worm,” since the code doesn't automatically replicate and spread) that was tweaked and re-released earlier this month.
Even though the non-worm was the result of insecure configuration of Listener accounts and not of a code flaw, on the day of eWEEK's visit, Oracle was rushing to get information to customers regarding proper configuration in order to batten down the hatches.
“We're being more responsive,” Heimann said. “We have a new security response process specifically targeted at that. We saw the response to the original Voyager posting. So we're going out today [with an e-mail blast], on the second iteration of Voyager. We [now] have the ability to get this information out quickly.”
Problem: Speed kills quality. Oracle sometimes has to check to ensure that even locking down a given component won't break a 10-year-old supported version.
And Oracle's products are complex. And they're getting more complex. They're getting more numerous with the acquisitions binge. And then there's Project Fusion, which will either wipe out past sins as the company starts with a brand-new architecture or usher in brand-new sins, since it will be a brand-new code set. Realistically, it will do both: wipe out old code sins and replace them with new code sins.
How will Oracle stay on top of all this code, as it buries its hands in the piles of code it has acquired and wrestles them into Project Fusion?
One thing it's started to do is root cause analysis. “When security bugs do occur, [we're asking things such as] Why did this bug happen? Were standards unclear? Was training sufficient? Is this bug a single instance of something, or is it more pervasive?” Heimann said.
Heimann's last point was echoed by Thomas Kristensen, chief technology officer of the bug-monitoring company Secunia, in a discussion of the 252 possible vulnerabilities reported in October by security researcher Alexander Kornbrust.
“Sometimes you can find some issues that appear to be individual vulnerabilities, but if you look at the underlying code … the number of fixes applied doesn't apply to the number found by researchers.”
Is a given flaw isolated? Is it indicative of a systemic problem? Is it a flaw or a feature, aka false positive?
It's software design. It's basically an art as much as a science, Heimann said. Oracle's chief hacking officer drinks with the black-hat crowd. Oracle's finding out new ways to break its code, just like the external security researchers, the David Litchfields and Alexander Kornbrusts of the world.
And then, just like any vendor, it's figuring out how to fix that code, and how to make sure that next time, maybe it won't break—as much or, maybe, in an elusive perfect world, not at all.
Check out eWEEK.com for the latest database news, reviews and analysis.