Getting a Handle on Buffer Overruns

By Peter Coffee  |  Posted 2002-03-25

Coffee: Alan, I know that buffer overruns enjoy a place of honor, if I may say it, on your list of things that have been around since the 1960s as a potential means of attack on systems. Do you think we're ever going to get to the point where people write code that knows what kinds of input it should be receiving and knows to reject anything that doesn't look right?
Paller: I think automation will cut down the number of buffer overflows substantially—automated checking tools. It won't make it perfect, and certainly training programmers is also important, but we don't have a situation right now where we even run all code through these kinds of checkers to see what the buffers are. What I sense in future development is that some automatic program will go through, check the buffer limit and then go back to the programmer for an electronic sign-off on that buffer limit.
Lipner: You just described [Microsoft's] PreFix and PreFast processes, Alan.

Coffee: The discussion we just had segues beautifully into the next topic I wanted to address, which is the manner in which security threats and vulnerabilities become public knowledge or, for that matter, don't become public knowledge but are circulated in a more private way to people who are in a position to do something about them before general awareness rises. Steve Lipner, I know that Microsoft has taken a position that perhaps that discussion needs to be more carefully limited to avoid vulnerabilities becoming too widely known before remedies are in place. Can you comment on that?

Lipner: Yes, there's been a lot of heat on that issue, and I'm going to try to be pretty clear in what I say. There are two separate components. One of them is that, if there is a vulnerability in code, that vulnerability must be fixed promptly, and customers must be given the information they need to protect themselves from it. At the same time, [making widely available] what people call exploit code--basically the mechanisms by which I can easily write a script or a program that destroys somebody's system, breaks into their system or defaces their Web site, or what have you--is hard to defend. What we've been trying to do with other partners in industry--a pretty wide range of them--is to reach agreement on the details of that fundamental set of points: protect customers by getting the fixes out to them, and protect customers by making it harder to attack them.

Coffee: So the argument that the circulation of exploit code is a socially valuable thing to do because it elevates the pressure on vendors to fix the problems quickly is not, I take it, one that you find persuasive?

Lipner: I can't imagine operating under any more pressure than we are now, with or without exploit code.
Coffee: Mary Ann, with the "unbreakable" campaign receiving the attention that it has, the online discussion of attacks or possible attacks against Oracle has certainly increased by several orders of magnitude. How do you assess the current state in the community of discussing and addressing vulnerabilities before they become widespread means of successful attacks?

Davidson: It's interesting. A lot of people use [the term] "hackers" like it's a bad word. I think you have to take the long view and realize that they're doing you a favor and that they almost universally contact us. We ask for [exploit code] because it helps us first validate that, yes, it's a problem. It also helps us make sure that we have an appropriate fix.

Coffee: When you say you ask for exploit code, you mean in the same sense that you would ask someone reporting a bug to send in a minimal test case that demonstrates the mechanism of the bug?

Davidson: Absolutely. It's not that I'm asking them to write it, but if they have it, please give it to me; it just makes things that much faster. The issue for us with vulnerabilities--and I think it really is unique to Oracle, because we run on so many platforms--is that we try not to release any alerts until we've finished all patch sets. We've done as many as 78 patches for one vulnerability, and I think you can see that we sort of have to train the hackers to be somewhat patient, because we can't do that in four days.
   I think we have a pretty good reputation with them. One of the gentlemen who has been reporting vulnerabilities and been vocal about it was in yesterday, and we're very happy to work with him. He's an excellent researcher and very ethical. It's better the devil you know than the one you don't. In the long run, I think these people are doing us a favor by working with us and helping us find these things and giving us a chance to fix them.

Peter Coffee is Director of Platform Research, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA, and Chapman College.
