As Oracle Corp. grapples with its patch quality and speed, some in the security community have compared it unfavorably to Microsoft Corp. Oracle security execs say this is an apples-to-oranges comparison, given the vast differences between patching the complex server software Oracle produces and the client technologies that make up much of Microsoft's security focus.
Still, Microsoft has managed to turn around its reputation as a large bug target with poor attention to secure code, in part by pulling together a centralized security organization, MSRC (Microsoft Security Response Center).
In contrast, Oracle has chosen to route security issues back to the appropriate product group and developer in an effort to create an ownership mentality toward code flaws.
While some charge that Oracle thereby puts the fox in charge of the hen house by giving developers too much say in whether their own code is buggy, Oracle views it as part and parcel of an educational effort that reaches anybody and everybody whose fingers are in the product pie.
eWEEK News Editor/Operations Lisa Vaas had a chance to chat with Oracle Chief Security Officer Mary Ann Davidson about this and other security matters while on a trip to the heart of Oracle's security setup in Redwood Shores.
Could you explain why a decentralized security approach makes sense for Oracle?
There are a couple of practical reasons. The reason we have a security bug-fixing team is not to fix the issues. It's to be an analytical process. (It's to analyze things like) "How do we have processes and systems to track issues? How do we make sure we're prioritizing them so the worst ones are fixed first?"
How does this work on a practical level?
We have a group in the database team called DDR (Diagnostic and Defect Resolution). Normally they do a lot of regular bug fixes. If a fault is reported, it gets processed and tracked by the group whose job it is to fix bugs.
That's a centralized approach, though. When and why did you move the bugs back to the developers?
We did security bugs that way for a while, but eventually decided to move it back to the developers who wrote the code, for a few reasons.
The main one is you want people to make fewer mistakes. One way to do that is make sure they know what they did wrong the first time.
The second reason is accountability. You want to foster the idea that you need to understand how a defect got in there and how it was exploited. You want to see how a hacker could get in.
Do you think this type of security structure will work better with Oracle's growth-via-acquisition?
A small team will never scale. A lot of companies talk about having a culture of security. That's the main thing you need; otherwise you'll never have enough of the people you need.
Some things must make sense to centralize, though. I'm thinking of encryption and such.
Policies and training make sense to centralize. Instead of every developer having to read through all the pages of secure coding standards themselves, we developed a Web-based training class we rolled out across the product stack. All you have to do is fire up your browser and take the class.
Similarly, when we make acquisitions, one thing we do is talk to the security weenies from the acquired companies. We say, "There's stuff we do to engineer security into our products, but you're probably doing some of the same things."
If you're doing something, if you've got something in your bag of tricks, we say, "Hey, let's incorporate that, so we don't have one PeopleSoft security way, and one JD Edwards security way, and one Siebel security way."
Does that mean we might be in different phases of doing stuff across the company? Yes, but we have one goal, not this guy doing it his way and some gal doing it her way.
Other things to centralize are the technical aspects of security knowledge, like encryption. So we have, as part of our coding standards: Are you using Oracle-licensed encryption routines? We've already licensed and vetted those, and probably have fixed (whatever they'd have applied it to).
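Davidson's point about relying on centrally licensed, vetted encryption routines rather than each team rolling its own can be illustrated with a minimal sketch. The function names here are hypothetical, and Python's standard-library PBKDF2 stands in for whatever vetted routines a vendor would actually license; the idea is that the hard-to-get-right details (salting, key stretching, constant-time comparison) live in one audited place:

```python
# Illustrative sketch only: prefer a vetted, centrally maintained crypto
# routine over homegrown code. hashlib and secrets are Python's stdlib
# stand-ins for a vendor's licensed crypto library.
import hashlib
import secrets

ITERATIONS = 100_000  # key-stretching cost, set once by the central team


def salted_password_hash(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a fresh random salt using a vetted KDF (PBKDF2)."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return secrets.compare_digest(candidate, digest)
```

Every product team calling the same two functions, instead of five teams writing five hash routines, is the "one way, not his way and her way" goal in miniature.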
A Hybrid Approach
So this sounds more like a hybrid approach: Centralized where it makes sense, decentralized on the nitty-gritty, specific product level.
Half of our team focuses on assurance: ethical hacking, engineering security into the development cycle and, if we didn't get it right, how you handle the bugs. The other half (focuses on) program management: What has to happen across the product stack? What are the right problems to solve? Not the sexy ones but the very simple, stupid (things), (like someone) trying to get everybody lined up on when you ship the product, and you ought to be able to tell customers how to lock it down and make it easy to get there.
You've challenged the responsibility of security researchers who hack Oracle and publicly report unpatched vulnerabilities. Yet you hire friendly hackers yourself. How do those two things jibe?
In terms of hiring researchers, there are multiple facets to this. There's hiring them as consultants, and there's whom we hire to be hackers. In either case, the big quality we look for, aside from technical acumen, is ethical behavior.
(Sometimes I'm asked,) "Why don't we hire so-and-so?" Part of the issue is, I tell people, "Look, if somebody in the past has been untrustworthy … I can't write a contract with somebody I don't trust." (A contract is) not there to create trust when there isn't trust. If somebody did kiss and tell, I don't have any practical recourse to enforce (the contract).
Why not? A contract isn't binding?
If I'm going to sue that guy, it's the mean old vendor who beats up the researcher who's just trying to protect customers. It's not so much that I'm worried about looking bad. It's all about confidence, but also trust. If somebody signs something under contract, I don't have to worry that they'll be selling us down the river.
For example, one guy we hired as an ethical hacker started as a regular tech guy who worked at some company. He was finding technical things and sending them to Oracle. He was really good at it.
We went back and said, "We'd like to hire you to do technical assessment." He said his employer wouldn't do it. He moved on to work for a security researcher, and we wound up hiring them. They turned out very professional; they did wonderful reports to (find vulnerabilities).
We liked him so well, we poached him. In a professional way.
So when you talk about trust, you're referring to somebody who won't air your dirty laundry before you've had time to patch systems, yes?
That's the issue. You want to protect a customer's system, which means when you find something, you want time to act on it, obviously very aggressively, and you want to make sure customers are protected.
It's not a hacker-vs.-(us thing). It's working with people you can trust who are going to put your customers in the forefront.
I know you're hot on getting NIST to pass standards about hardening products, so technology will be more turnkey, right?
They (should be able to) push a button, and you give them a tool to monitor that. And you automatically tell them when they're still locked down, etc. It seems so obvious; why hasn't it been done? I just got a car. I didn't have to say, "Where's the configuration part for the brakes?" All this security stuff is just there. I wouldn't know how to disable the brakes on my car: They're just there.
You ought to make it easy for somebody to run a reasonably secure configuration without them having to do too much.
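The "push a button and monitor that it stays locked down" idea can be sketched as a simple drift check against a hardened baseline. The setting names below are invented for illustration, not Oracle's actual defaults; the point is that a tool, not the customer, decides whether the configuration is still secure:

```python
# Hypothetical sketch of configuration-drift monitoring: compare a
# product's current settings against a hardened baseline and report
# anything that has drifted. Setting names are invented for illustration.
HARDENED_BASELINE = {
    "remote_login_enabled": False,
    "default_accounts_locked": True,
    "min_password_length": 12,
}


def config_drift(current: dict) -> dict:
    """Return {setting: (current_value, expected_value)} for every setting
    that differs from the hardened baseline. Empty dict means still locked down."""
    return {
        key: (current.get(key), expected)
        for key, expected in HARDENED_BASELINE.items()
        if current.get(key) != expected
    }
```

Run periodically, an empty result confirms the system is still in its shipped locked-down state, and a non-empty one tells the customer exactly which knob was loosened, which is the "automatically tell them when they're still locked down" behavior described above.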
The government has caught on to this. … They said they'd save millions by having Microsoft deliver hardened, locked-down (product).
Secure by Default
But Oracle now has a Secure by Default initiative going on. How's it going?
We have an initiative to do this across the product stack. Wouldn't it be nice if we could lock down the database and make sure other products will accept that and not break? It's amazingly hard to do when you've got five product stacks (and so many) platforms.
I feel so strongly about this that I keep lobbying the federal government to make it a procurement practice: a Secure by Default program.
So you're saying the industry is shipping junky products now and has to be regulated to get it to snap out of it?
Analysts are in a good position, and the press too, to start answering questions like, "What is my cost of ownership going to be for this product? Will the patching costs be terrible? The previous version had lots of worms." You want to know what you're getting and what it will cost you. (Companies) can also use their purchasing decisions to push (their) vendors.
Arent they doing that already?
I don't think customers are as knowledgeable. They're not coming in and saying, "Hey, give me crappy products." I think they don't know what to ask. I want empowered customers who know how to push their (vendors).
Can you actually comply with a Secure by Default mandate on the part of government procurement practices?
When I talk to development, I say I'm pushing the government to do things that we can't comply with right now. I said, "Please don't wait to do things under duress. Let's get ahead of that curve." It's just good; it's one way to improve the industry. We collectively need to improve.
What if civil engineers built bridges the way vendors build software? It's there, it's safe. You don't worry when you walk into (a restaurant) that it will fall down.
We're used to physical structures being safe. (With software, we're) used to things being down and broken.
Software is like the Wild West. Anybody can write code, and if they do a bad job, well, that's fine.
It's time to grow up and make software as reliable as physical infrastructure, because it is physical infrastructure. It means universities changing what they do, companies hiring people who write good code and not sexy code, and maybe licensing.
You want to be a leader here. It's not enough to be good. We need to foster (improvement in the industry). I'm not proprietary about doing good, secure coding.
We'd rather compete on feature/function, rather than on who has an uglier baby.
Check out eWEEK.com for the latest database news, reviews and analysis.