In a roundtable discussion moderated by eWEEK Technology Editor Peter Coffee, some of the industry's top security experts spoke candidly about the ability to secure e-business; the responsibility and culpability of the vendor, IT management and hacker communities; the security challenges inherent in Web services; and how enterprise IT can not only respond to but also stay a step ahead of security problems.
Their verdict? Attitudes are improving, but core problems will take years to fix.
Mary Ann Davidson, chief of security, Oracle Corp., Redwood Shores, Calif.
Ed Glover, director of enterprise security and customer engineering, Sun Microsystems Inc., Palo Alto, Calif.
Brian LaMacchia, lead .Net Framework security developer, Microsoft Corp., Redmond, Wash.
Steve Lipner, director of security assurance, Microsoft
Alan Paller, director of research, The SANS Institute, Bethesda, Md.
Steve Trilling, director of research, Symantec Corp., Cupertino, Calif.
Peter Coffee, eWEEK technology editor
Coffee: I want to begin by asking if you think there is a greater acknowledgment of the security problem. Do you think that computer users and system operators are more inclined now to treat security as a shared responsibility instead of something that's just supposed to happen in the basement?
Trilling: Absolutely. In fact, I would say even more strongly that the notion of security has really moved to the boardroom and CEO level, as well as to the legislative level on the government side. What used to be more of a purchasing issue is now something that really needs to guide the fundamental business strategy of any organization.
Just to give an example, a company like Dow Chemical was estimated to have done $1 billion of revenue online alone in 2001. Clearly, lots of organizations across all spaces, not just technology spaces, are generating a lot of income and a lot of business from online activities. The need to secure all those transactions, both going outside the organization as well as inside the organization, is going to play a fundamental role in any business strategy moving forward.
Coffee: This question goes to our Microsoft panelists: Microsoft is at the beginning, as we speak, of a one-month lockdown, where you're not going to write any new code but are going to be reviewing your existing code base for integrity issues. Can you comment on that, and on the effect on the company of the very high-level directives you've been receiving from your top management on that score?
Lipner: Microsoft has always had a focus on security, but we've [also] had a focus on usability, a variety of focal points that we've tried to balance. I think that Bill's e-mail [a memo to staff articulating a broad-based plan to combat security and reliability problems in the company's products] has a clear impact of changing the balance. That really gets a lot of very bright people very energized to do security in a much higher priority way, and that's going to have a long-term impact on the security of our products.
LaMacchia: Just to add onto that, we in the developer division actually went through sort of a miniature version of this security push, as we call it internally, last December, just as we were finishing up Visual Studio .Net. We took the entire developer division and focused it on trying to find exploits on top of our own code, and that was a very productive time. For those of us who are working in security day in and day out—on security features and penetration testing—it's quite refreshing, actually, to be able to get all of the energy that normally goes into all of the various aspects of Microsoft and focus on our particular area to help make the product stronger. Now we're actually carrying that over, and the fact that we have this extra month to do even more testing and even further analysis is great from our perspective.
Coffee: Oracle has really thrown down a challenge to the dark side of the IT community and a promise to the IT buyer with the “unbreakable” label. Was that something you did because research indicated it was what people wanted to hear, or did it come from an internal conviction that this was the time to put that stake in the ground?
Davidson: I think we should differentiate between [Oracle Chairman and CEO] Larry [Ellison] deciding this is a great marketing campaign and what we've always been doing.
Just to recap, we didn't actually have a compulsion to do something like stop development for a month, although I certainly laud Microsoft for focusing on that.
Since we've been doing security evaluations for over 10 years, we've already built in processes that say that security isn't an afterthought. It's something that we've built into our development processes from the get-go, and, as a result, we have a culture of security.
For example, we've always been willing to stop a release if there was a large enough security issue that we would need to address before the product goes out the door. Every one of those evaluations represents about $1 million in additional security [quality assurance] by someone other than Oracle to make sure that, not only is the product robust, but that our secure development processes are spot-on.
The only difference with "unbreakable," really, is that we've already begun taking those processes and moving them across the entire product stack—and that includes everything from secure coding standards to security checklists before we release the product. We continue to do formal evaluations, and we have an ethical hacking team that does internal risk assessments. So, for us, "unbreakable" is sort of more of the same. The difference is that we're sticking our neck out and saying that everyone ought to take the "unbreakable" pledge. Everyone ought to be willing to step up and say that your product has to be bulletproof.
Coffee: Alan of the SANS Institute is the only one here who never has to wonder if a product shipment should be delayed to meet a security goal. Alan, what's your comment on what we've been hearing from our industry participants? Do you think that there's been any change in people's willingness to hold up the shipment of a product to meet a security criterion, or do people still push it out the door and let the user discover the problems?
Paller: I think there's a real change that's taking place, and it's palpable in Microsoft's products already. The weakness, though, is it's solving about one-tenth of the problem and making 90 percent of the noise.
The Other 90 Percent
Coffee: What's the other 90 percent of the problem?
Paller: It's the configuration of the systems. We're about to come out with the first draft of a step-by-step guide to securing Oracle, and we're having a terrible time trying to make it short enough so that we can publish it.
Coffee: This would be along the same lines as the joint report you released with the FBI, showing the default installation configurations and the known vulnerabilities they create are perhaps the largest problem out there?
Paller: I think it's probably the biggest problem, yes.
Davidson: Alan, if you don't mind my interrupting, if you would send me that document, I promise that I will take a hard look at that, and, wherever possible, we will try to change the default configurations to make it much easier to be secure out-of-the-box. We're already doing that, and I would welcome your input.
Paller: Absolutely. I'll be happy to share it with you.
Coffee: Ed, I haven't put you on the spot on this question yet, but is there anything you'd like to throw into the pot on the general question of the level of willingness to acknowledge the existence of a security problem, either inside or outside the vendor organization?
Glover: I focus more on talking to customers, and what we're finding is that customers are a lot more aware of the security issues out there. What I'm finding from that point is that they're not sure where to start. They're having a lot of difficulty trying to figure out what to do.
I think it's great in a lot of ways that the vendors are putting more emphasis on security. I think more people are feeling that [they should] come out with default security in their products, which is terrific, and I support that 100 percent. But, really, at the end of the day, when people integrate these [configurations] into their environments without really understanding the true risk and understanding the vulnerabilities, a lot of times they open things up. We find that all the time in our assessments—that people take what is installed as good security and kind of make it bad security. People aren't really educated on what good security is about. They recognize there's a problem, they understand there's a problem, but they're not sure how to approach it.
Coffee: Alan, what's your assessment of the investments that enterprise organizations are demonstrating a willingness to make in training and, for that matter, just hiring enough people to do the security job correctly?
Paller: My sense is that there's a new willingness—that the shift after Sept. 11 has been from security being a job for the security department to security being a job for the operations people. As soon as they figure out that every network administrator and every system administrator actually has to know this stuff, they're beginning to invest in reasonably large numbers in getting these people trained. Until the recession started, most of our courses were sold out. Through the recession we had lots of space, but this month we've gotten back to over half of all of our programs being sold out again.
Coffee: So, as IT spending rises, you're anticipating that security will be one of the big beneficiaries of that?
Paller: Yes. Sadly, too much of that money is going into studies and not enough into actually locking down systems.
Coffee: Let me elevate the discussion about 10,000 or 20,000 feet: Is the core technology of the Internet securable?
Paller: There have been meetings as high up as you can go in the country on exactly that topic, and the general conclusion is that, although in the short term you can do some good things, there are some fundamental changes that have to take place. By the time it's all adopted, we're a decade away.
Coffee: So we can't solve the problem with even massive initiatives like the .Net Framework, in its effort to provide a secure runtime environment for Microsoft services, for example?
Paller: But every one of these steps is a wonderfully important step. I don't want to belittle anything.
Coffee: Ed, Sun has been part of the core technology of the Internet since before it was a matter of public awareness. What do you view as the things that Sun is trying to do to deal with these core technology issues?
Glover: As Sun continues to come out with new technologies, security is an extremely key part of its vision and how to address it. We're dealing with a lot of inherent issues that are going to be extremely difficult to solve, just because it's an afterthought, and any time you think of security as an afterthought it's always going to be a problem. Sun has tried to take a leadership role in this, and tried to identify the security that needs to be implemented and built into the products. [We're] working with our vendors and our partners and our suppliers and everybody out there in trying to make sure that security is addressed properly.
The biggest thing I see out there as a problem is that, a lot of times, everybody is kind of going in various directions and trying to get standards and things agreed upon, and that's still going to pose a major problem in the future.
Stepping Up to the Plate
Coffee: I always watch the Symantec hoax virus site to take the temperature of how aware people are of what really is going on out there. Steve Trilling, do we have users who are more willing to be participants in making themselves secure, or do they still want to essentially be an audience for content and rely on the supply side to deal with the security issues?
Trilling: I certainly think that if there is any silver lining with all of the high-profile attacks, it's that people are much, much more aware of the potential downside from these threats and much, much more willing to take appropriate steps to secure their own systems. This means corporate users as well as home users. You think of all of the information that organizations and home users used to store in filing cabinets, in drawers, in large warehouses that is now stored on hard drives, and I think that the general level of awareness in security has certainly increased.
At the same time, there is little question that the issue of securing the Internet or securing any organization is not just a technological one, but a human one. As we saw, for example, with the Code Red threat this summer, everyone who appropriately patched their Web servers was not hit by Code Red and also did their part to help protect the rest of the Internet. In a perfect world, everyone would have patched their systems, and that threat would never have spread. So there is certainly a lot of human education and human effort that goes along with this, but there's no question in my mind that the level of consciousness across the consumer and corporate and government space is very much increased over where it was a year or two years ago.
Coffee: With the rollout of Windows XP, Microsoft has tried to make the notion of user involvement in maintaining system configuration less important than it used to be by initiating the idea of automatic updates. The system is always finding out what patches are available and installing them itself. How has the response to that been?
Lipner: It's been very positive, and one of the things that we're trying to do is just to get the word out that those features are built in and that it's a key factor in making the Internet experience safer for consumers and businesses. I looked at the download numbers for one of the patches that we released late last year, and within a matter of three or four days we were up in the 5 million downloads range, thanks to the auto update and the Windows update technology.
Coffee: Brian, I don't want to put you at a disadvantage, but today [Feb. 14] I believe there was an announcement that there was a vulnerability that had been discovered in the .Net Framework. I don't want to beat you up on that, because we've all just found out about it, but I wonder what your comment might be on the difficulty of persuading people that the intrinsic security of the platform is higher than it used to be when the Framework is practically just out the door and we're already starting to find issues with it.
LaMacchia: I think youre referring to the [report] that came out from [Cigital Labs Chief Technology Officer] Gary McGraw on increased protection against stack overflows and buffer overruns that we added to the C++ compiler.
Coffee: Yes, that was the vulnerability I had in mind.
LaMacchia: Let me try and just spell this out, because it's not a vulnerability. We added a compiler switch to the unmanaged C++ compiler that we shipped as part of Visual Studio. [We did this so] we can throw the switch and get some additional checking as [a defense against] stack-smashing techniques, which obviously are one of the common ways that people exploit buffer overruns that you put into your code.
What Gary basically says is that this is a reasonable technique. It's been known out in the community, and on certain other platforms folks have used it before, but it's not 100 percent guaranteed: It makes it more difficult to exploit buffer overruns that you have in your code, but it doesn't completely seal things off, and that's true.
The point of this feature was to basically give developers a way to increase the defenses that they had against their own buffer overruns being exploited. Obviously, what we want people to do is not write buffer overruns in their code, and when you move into the managed environment that you have on the .Net Framework, you don't have that problem at all, because we do type safety verification on everything that comes in. You can't actually overrun buffers because we do the memory management for you. So it's not a vulnerability in the compiler, but it is a feature that provides some added defenses if you're not moving over into the new managed code environment.
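The distinction being drawn here, between an unbounded copy that can smash the stack and a length-checked one that cannot, can be sketched in a few lines of C. This is a hypothetical illustration; the function names are invented for the example and come from no vendor's codebase:

```c
#include <string.h>

/* The overrun-prone pattern: strcpy() copies until it hits the source's
   terminating NUL, with no regard for how big the destination is. If
   src is longer than dst, adjacent memory is silently overwritten. */
void risky_copy(char *dst, const char *src) {
    strcpy(dst, src);
}

/* A bounded alternative: reject input that will not fit, so the copy
   can never write past the end of the destination buffer. */
int bounded_copy(char *dst, size_t dstlen, const char *src) {
    size_t srclen = strlen(src);
    if (srclen >= dstlen)       /* leave room for the terminating NUL */
        return -1;
    memcpy(dst, src, srclen + 1);
    return 0;
}
```

Compiler-level defenses of the kind described above add a runtime check that catches the first pattern after the overwrite happens; a managed runtime rejects it outright by verifying every access against the buffer's declared length.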
Lipner: As Brian said, this is not a .Net common language runtime issue that's been raised. It's rather a Visual C++ compiler issue—a different language, different technology. The second point is that there are as many ways of running a buffer overrun as there are of writing a program or building a Turing machine, to be overly technical about it. What we do, for example, in the Windows division effort is use that compiler technology, and we use automated tools to scan basically the static source code to detect places where buffer overruns may be, and we train developers to not write buffer overruns. Among these three measures, we hope to get pretty good coverage on this issue, but one of the things we say is security is a journey, not a destination. At the end of the day, you don't get to perfection.
Getting a Handle on Buffer Overruns
Coffee: Alan, I know that buffer overruns enjoy a place of honor, if I may say it, on your list of things that have been around since the 1960s as a potential means of attack on systems. Do you think we're ever going to get to the point where people write code that knows what kinds of input it should be receiving and knows to reject anything that doesn't look right?
Paller: I think automation will cut down the number of buffer overflows substantially—automated checking tools. It won't make it perfect, and certainly training programmers is also important, but we don't have a situation right now where we even run all code through these kinds of checkers to see what the buffers are. What I sense in future development is that some automatic program will go through, check the buffer limit and then go back to the programmer for an electronic sign-off on that buffer limit.
Lipner: You just described [Microsoft's] PREfix and PREfast processes, Alan.
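A crude version of the check-and-sign-off loop Paller describes can be approximated by scanning source text for calls that copy without a length bound. Real static analysis tools parse and model the code; this hypothetical C sketch only does substring matching, but it shows the flag-for-review idea:

```c
#include <string.h>

/* Calls that copy with no length bound; a reviewer would be asked to
   sign off on (or replace) each flagged occurrence. */
static const char *flagged[] = { "strcpy(", "strcat(", "gets(", "sprintf(" };

/* Count how many flagged, unbounded calls appear in a source snippet. */
int count_flagged_calls(const char *source) {
    int hits = 0;
    for (size_t i = 0; i < sizeof flagged / sizeof flagged[0]; i++) {
        const char *p = source;
        while ((p = strstr(p, flagged[i])) != NULL) {
            hits++;
            p += strlen(flagged[i]);
        }
    }
    return hits;
}
```

A snippet containing `strcpy(dst, src)` would be flagged once, while the bounded `strncpy` and `memcpy` variants would pass untouched; the programmer then signs off on, or rewrites, each hit.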
Coffee: The discussion we just had actually segues beautifully into the next topic I wanted to address, which is the manner in which security threats and vulnerabilities become public knowledge or, for that matter, don't become public knowledge but are circulated in a more private way to people who are in a position to do something about them before general awareness rises. Steve Lipner, I know that Microsoft has taken a position that perhaps that discussion needs to be more carefully limited to avoid vulnerabilities becoming too widely known before remedies are in place. Can you comment on that?
Lipner: Yes, there's been a lot of heat on that issue, and I'm going to try to be pretty clear in what I say. There are two separate components. One of them is that, if there is a vulnerability in code, that vulnerability must be fixed promptly, and customers must be given the information they need to protect themselves from it. At the same time, [making widely available] what people call exploit code—basically the mechanisms where I can easily write a script or a program that destroys somebody's system or breaks into their system or defaces their Web site, or what have you—is hard to defend. What we've been trying to do with other partners in industry—a pretty wide range of them—is to reach an agreement around the details of that fundamental set of points: Protect customers by getting the fixes out to them, and protect customers by making it harder to attack them.
Coffee: So the argument that the circulation of exploit code is a socially valuable thing to do because it elevates the pressure on vendors to fix the problems quickly is not, I take it, one that you find persuasive?
Lipner: I can't imagine operating under any more pressure than we are now with or without exploit code.
Coffee: Mary Ann, with the “unbreakable” campaign receiving the attention that it has, the online discussion of attacks or possible attacks against Oracle has certainly increased by several orders of magnitude. How do you assess the current state in the community of discussing and addressing vulnerabilities before they become widespread means of successful attacks?
Davidson: It's interesting. A lot of people use [the term] "hackers" like it's a bad word. I think you have to take the long view and realize that they're doing you a favor and that they almost universally contact us. We ask for [exploit code] because it helps us first validate that, yes, it's a problem. It also helps us make sure that we have an appropriate fix.
Coffee: When you say you ask for exploit code, you mean in the same sense that you would ask someone reporting a bug to send in a minimal test case that demonstrates the mechanism of the bug?
Davidson: Absolutely. It's not like I'm asking them to write it, but if they have it, please give it to me, and it just makes it that much faster. The issue for us with vulnerabilities—and I think it really is unique to Oracle, because we run on so many platforms—is that we try not to release any alerts until we've finished all patch sets. We've done as many as 78 patches for one vulnerability, and I think you can realize that we sort of have to train the hackers to be somewhat patient, because we can't do that in four days.
I think we have a pretty good reputation with them. One of the gentlemen who has been reporting vulnerabilities and been vocal about it was in yesterday, and we're very happy to work with him. He's an excellent researcher and very ethical. It's better the devil you know than the one you don't. In the long run, I think these people are doing us a favor by working with us and helping us find these things and giving us a chance to fix them.
Open Source Security
Coffee: The culture of disclosure and the idea that all bugs should be discussed as completely as possible in the open is certainly part of what's been generally labeled the open source community's attitude toward development. Ed, Sun recently announced that it's going to provide a degree of support for open source operating systems, in addition to its own Solaris operating system, that it previously had not. Do you feel that there's any difference in your ability to deliver and maintain a secure and reliable product based on the open source process, compared with the process by which you've been producing your own branded operating system that's sold into largely similar applications over the years?
Glover: I don't think there will be much of a difference there, no. It's going to be a challenge for us, but I think we'll be able to provide what our customers need.
Coffee: Mary Ann, Oracle products also run on the Linux operating system. Do you feel there's any difference in the level of security that can be achieved in an open source environment compared with the more traditional way of producing and selling operating systems and other platform software?
Davidson: I think it depends on what open source you're talking about. There was actually a discussion, I think, about some open source security software, and, for us, the issue wasn't so much robustness but things like liability and supportability. Not to go too much off on a tangent, but we use licensed crypto libraries from vendors, and it's not that the algorithms aren't well known and you can't get open source crypto, but we felt that the fact that there was no third-party indemnification, and the fact that we had excellent code and good supportability from vendors rather than from open source, is why we went with a licensed toolkit.
Coffee: That really elevates an important issue surrounding open source. It's been widely said, quoting Eric Raymond, if I remember correctly, that given enough eyes, all bugs are shallow, and therefore it makes sense to believe that bugs will be found and fixed more quickly in that community. But, at the end of the day, the real question, I think, as Mary Ann has just elevated, is that risk is a business decision and that you want to deal with people with whom you have a business relationship. And you can't have a business relationship with an anonymous community of people who are essentially doing things on a volunteer basis.
Is open source a model in which people can buy and rely upon products, or do people want, if I may be vulgar, someone they can sue if what they get doesn't do what it's supposed to do? Is there still a need for traditional identifiable ownership of key software properties so that you can have that kind of relationship?
Davidson: The other issue for us was the cleanliness of the code. We looked at the libraries and said, in the open source equivalent, there were 400 libraries. It wasn't particularly well implemented. It wasn't bad, but it just wasn't mature. The licensed libraries we had had been optimized for Oracle since we worked with this vendor, so we had 40 nice, clean libraries that worked on our mission-critical software really well vs. 400 libraries. So there were liability issues.
Even if patches are issued quickly in open source, that can actually be a problem. You don't want to be putting security patches or any kind of patch into your code base every week or two and having it ripple through your entire product stack. You'd like to get nice, clean patch sets that have been tested and regressed at fixed points.
Who's Inspecting the Code?
Lipner: Peter previously quoted Eric Raymond, saying, "Given enough eyes, all bugs are shallow," but the real issue is, are many eyes looking? … You still get these buffer overruns detected in public open source libraries that have been available for inspection and download and modification basically forever. That's not to say that code inspection doesn't work, but I think what it's saying is that, with open source, everybody is relying on somebody else to do the code inspection. We get code inspection done by people paid to do that, and that's a model that's pretty understandable.
Coffee: Let me bring eWEEK Labs Technical Director Jim Rapoza into the conversation at this point. Jim, is security something that will emerge out of the community process that surrounds open source, or is it preferable to buy it from someone and insure against whatever flaws there might have been in that purchased product?
Rapoza: If you look at Apache, that has an excellent security history. I think part of it is how much a product is used. Apache has had bugs in the past, and the open source method has done a very good job of locking it down and securing it. Sure, you can pull out things like BIND and stuff that seem to consistently have problems. It's nice to get that nice, clean patch from a vendor that you know has undergone a lot of regression testing, but there's probably still a relatively high percentage of times that you install that nice patch from a vendor and it causes problems, and you have to revert to an earlier patch or you have to reinstall.
Coffee: One of the things that we've seen in the community of those who sell security is an evolving relationship with the sellers of business interruption and business continuity insurance, where, in effect, the use of technology becomes part of the best practices that they want to see as a condition of writing an insurance policy against interruption of the business by means of an Internet attack, for example. Steve Trilling, do you think that that's a virtuous cycle that we have emerging here, where the security technology and the people who insure against the residual risk work with each other to help improve the overall risk balance?
Trilling: That's a little bit of a hard question to answer. Certainly, we think that the security policy inside any organization should be guided by security experts, and not necessarily just by insurers or by legislation, but by people with a lot of experience at securing particular organizations. At the same time, I think there is awareness that, for critical organizations across the world, there need to be some basic security standards. When we put our money in a bank, we want to know that that bank is securing itself appropriately, so the federal government has raised a lot of issues and has certain laws now regulating financial and banking and health care and other critical industries. Certainly anything that raises the consciousness of security inside corporate environments is a good thing. However, at the same time we want to be careful and make sure that security policy decisions at organizations are generally guided by security experts and security companies, such as Symantec and others.
Coffee: With all of the aspects of security that are quite specific to a particular installation of a product, or installation of a combination of products and so on, do you think we can ever get to the equivalent of an Underwriters Laboratories certification that says, yes, this is a secure product or, no, this is not?
Davidson: We have that—formal security evaluations. It's not just that they're international standards. The Common Criteria is an ISO standard, and certainly it is being required. The U.S. federal government, through NSTISSP 11—the National Security Telecommunications and Information Systems Security Policy No. 11—says that systems involved in national security have to have independent measures of assurance.
Coffee: Yes, but can that certification have the same meaning to the buyer that they're accustomed to seeing when, for example, they see a UL label on a piece of electrical equipment?
Davidson: Getting at a major issue that I don't think has been raised, and I certainly don't want to be in a blame-the-victim mode, is one of the general problems that you've had to date: Customers do not make security a purchasing criterion. It's sort of like, "I buy the product because it has all the bells and whistles," and then they try to see whether it's secure or not, or they want it bolted on. And, just as you have to build security into your entire process, you have to make security a part of your purchasing criterion, and you do that in a number of ways. One of the ways you can do that is to look for the seal of approval. Look for an independent evaluation—FIPS 140, if it's a cryptographic product, or a Common Criteria evaluation. You can also look at the vendor's track record in terms of how many security patches they're issuing. Do they have a long history of maybe not paying attention to security, and are they responsible when they do have to patch things? There are a whole lot of things that you can go through.
Lipner: My iron is a lot simpler an artifact than my laptop computer with the software loaded on it, and there are a lot more ways to use the laptop and misuse it than there are the iron or the toaster. Some of that is incumbent on the user, and some of it is incumbent on us to make the default secure and make the installation secure and disable the services that people aren't likely to need and so on. But there is still a residual element that's going to be left to the user, and that's probably going to be bigger than with an iron or a toaster.
Coffee: Ed, when people come to you at Sun to buy a server farm that's going to be the foundation of an e-business, do they ask questions that make you guys believe that they are thinking in terms of security as being a differentiator in the choices they make?
Glover: Absolutely. Security is pretty much always a question, whether it's buying server farms or even any kind of services that we have today. Remember, I'm from a professional services standpoint; I'm not a product person. When I work with our customers out there, they are expecting security to be inherent in the technology and in the implementation and integration, and a lot of times we try to also work with our customers to understand that security in products is just one part of it. It's also the people and process, which is very important, too. It's got to be all three of those together to allow you to achieve the level of security that you want to achieve, given the amount of risk you're willing to accept.
Trilling: If I could make an analogy to reinforce the point that everyone is making, if an organization bought a big physical alarm system for a plant, it wouldn't do them any good unless they had a proper policy in place—who knows the alarm code, who knows how to turn it off and turn it on, and, when it goes off, whom do you call? All of those things would be critical to the success of securing any physical environment, and the same kinds of policies apply to securing a cyber environment.
At the same time, organizations would certainly like security to happen by default as much as possible, and one of the things we've tried to do is make it easier and easier for people to get updates. For example, our consumer antivirus product now will check whenever you're online for all of the updates that have been posted at our Web site since the last time you were online, and so users will get them automatically without having to push any buttons. Nevertheless, there is always going to be a crucial human factor in securing any kind of environment, whether it's a home or a building or a set of hard drives or a lot of servers.
Glover: I'd like to add to that. I've been in the security business for a long time, and I echo that on the human factor piece, because over the last 20 years, I constantly see the same things over and over again. We're dealing with the same issues. Sure, products are addressing more of the security, and we're building more security into it, and people are aware of it. But, as a consultant, I constantly see the same things over again, and it starts with the lack of policies and the lack of understanding of what security needs to be built in. It's always the same story over and over again, so it's really a people issue.
Coffee: Things have been difficult enough when we've been able to draw a line around a group of facilities and say, "This is the environment." Now, as we move to a services paradigm, you won't really have the option of isolating yourself from the network and getting on with your work. It's like deciding not to breathe because there are contaminants in the atmosphere.
People have said that a firewall on your net is like a steel door on your front door, but if the UPS man gets let in to make a delivery and it turns out that what he was delivering had a bomb in it, your front door didn't really do you much good. We're going to have the electronic equivalent of the UPS guy, the pizza guy and every other technician in the world walking in the door and doing things for you as we move to a Web services mode.
Ed, as we talk about the Web being a vehicle for delivering services, what does that do to the security issues?
Glover: It makes it a lot more interesting and challenging for us. The thing with Web services is that we are definitely going to go into it, and hopefully we go into it in such a way that we take some small steps first and really try to understand what our risks and our vulnerabilities and threats are, and who we're working and dealing with in that area. I think that, from a security standpoint, people need to start looking at this at the beginning and not after the fact, and really understand who their partners are. You might start off in the beginning working with a few partners—people that you already trust and people that you can work with to identify the threats and vulnerabilities together. I think it is going to take some time for people to work through this before they really feel comfortable going out, and security is one of the leading drivers in allowing Web services to really become mainstream.
Coffee: Brian, let me turn the question to you. Obviously, making .Net secure has to be regarded as an absolutely crucial starting point in getting the .Net paradigm established as the way people want to write their new applications. Where do you feel you are on that?
LaMacchia: As you commented earlier about firewalls offering limited protection if you start letting lots of people through, we have to build in security at a number of levels. We build security into the execution environment, in our case using the .Net Framework and the runtime; we have to build it into the protocols, too.
Today, if you're doing a Web services call, you end up doing that by doing channel-level security and running SSL or maybe doing some client/server authentication back and forth. That's why we're also putting a lot of effort into coming up with, and participating with other companies in, the security standards for XML Web services.
We start with being able to strongly authenticate XML messages that come out of our XML digital signature standard, which is about to come out of the IETF and W3C as a full standard. We also see that work in XML encryption and XML key management services. These are all fundamental pieces of how we're going to pass SOAP messages around that are going to be digitally signed and encrypted to provide security and authentication.
Once we have that, we can base strong authorization decisions off of it so that you know whether that UPS guy at your door really is a UPS deliveryman or is an imposter, and you can make appropriate trust decisions on that basis.
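The authenticate-then-authorize flow LaMacchia describes can be sketched in a few lines. This is a minimal illustration using a stdlib HMAC as a stand-in for the XML digital signatures he mentions; the key, message and function names are hypothetical, and a real Web service would use asymmetric XML-DSig keys rather than a shared secret:

```python
import hashlib
import hmac

# Hypothetical pre-shared key; XML-DSig would use X.509 certificates instead.
SHARED_KEY = b"demo-key"

def sign_message(body: bytes) -> str:
    """Attach a digest so the receiver can authenticate the sender."""
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify_message(body: bytes, signature: str) -> bool:
    """Recompute the digest and compare in constant time."""
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

msg = b"<soap:Envelope>deliver package</soap:Envelope>"
sig = sign_message(msg)
print(verify_message(msg, sig))                                   # genuine sender
print(verify_message(b"<soap:Envelope>tampered</soap:Envelope>", sig))  # imposter
```

The point of the panel's "UPS guy" analogy is the second call: a message whose signature fails verification is rejected before any authorization decision is made.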
Trilling: With regard to your analogy about the firewall and the UPS guy, one of the things we've really seen is that it's important for people to install what we call application, or Layer 7, firewalls, which will actually inspect the content of the traffic—the equivalent of looking under the coat of the UPS guy to make sure he doesn't have a bomb.
In the past, organizations sort of thought, as you point out, "If I put a firewall or antivirus software around my perimeter, I'm done." Now people are opening up their networks not only to customers but to suppliers, to telecommuters and to all kinds of people.
It's not only about blocking everybody out, but letting just the right people in and having appropriate systems in place. That may include firewall and antivirus, as well as intrusion detection systems around your most critical systems to tell you when a break-in is taking place, and vulnerability assessment software that can proactively tell you where the holes in your system may be and where systems need to be patched in advance of an attack.
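The content inspection Trilling describes can be sketched as a toy Layer 7 filter. The signature list and function name here are illustrative only; a production application firewall would use large, continuously updated rule sets rather than two hardcoded patterns:

```python
import re

# Hypothetical byte-pattern signatures for demonstration purposes.
SIGNATURES = [
    re.compile(rb"<script>"),                    # naive script-injection marker
    re.compile(rb"DROP\s+TABLE", re.IGNORECASE), # naive SQL-injection marker
]

def inspect_payload(payload: bytes) -> bool:
    """Return True if the application-layer content looks safe to forward."""
    return not any(sig.search(payload) for sig in SIGNATURES)

print(inspect_payload(b"GET /index.html HTTP/1.1"))  # benign request passes
print(inspect_payload(b"q=1; DROP TABLE users"))     # flagged and blocked
```

Unlike a port-level firewall, this kind of check opens the "package" and looks at what is actually being delivered, which is exactly the UPS-guy scenario the panel is discussing.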
All of these are important and crucial decisions that people need to evaluate on the basis of the value that they're storing on their networks.
What IT Needs to Know
Coffee: What do IT people need to understand about security that they don't now, and, given that understanding, what do they need to be sure they're clearly communicating to non-technical management?
Glover: What I find a lot of times working with IT people is that they feel they've got a really good understanding of security and know what needs to be done. They try to take control of it, and sometimes in the wrong place. They don't really understand that security is something that needs to be embedded at the company level. It has to be in the DNA of the corporation. That is so important: everybody is responsible for security, not just IT. A lot of times, IT will drive the decision on security, when it should be a business driver.
It's just not getting up to the level where it should, and, as a result, corporations just don't understand the true need for security. When you talk to CIOs about security, they understand it's important, but they're going to come back and say, "What is my return on investment by employing this security?" That's a really hard question to answer, and I don't think we can answer it today.
Davidson: I would second large elements of that. For us, it really is about having a culture of security, and that encompasses a number of factors. It has to be a top-down commitment to security in all aspects—physical security as well as IT security. You don't need to have the National Security Agency's corporate culture of security, but it needs to be appropriate to the business.
Trilling: Security is not an all-or-nothing thing, and it is not the same for every organization. It is an ongoing evolutionary process, and we often say that it is important to secure your environment in such a way that the cost of breaking in is greater than the value of the information that you're storing, and that will be different for everybody. You need a lot more security on a large government building than on an empty studio apartment, in just the same way that you need a lot more security in a large corporate network than somebody probably does on their own home personal Web site. So everyone needs to evaluate the information on their business systems, and do it in an ongoing way. Security needs to be evaluated on a regular basis. It is really an ongoing process in which you never get to a point where you are done.
Security really is almost a lifestyle inside the organization. It's not only about installing and deploying the appropriate software, but also about making sure people's passwords are appropriately long, that people aren't leaving their passwords on sticky notes on their monitors, and that, when people take home their laptops, they're not out surfing the Internet unprotected with sensitive company information on their machines. There are all kinds of cultural aspects to embedding security inside an organization that really need to come from the top down.
Lipner: It's not a matter of do one thing and you're protected. It's really a matter of defense in depth, of taking measures across the board—the sorts of things that Steve Trilling was talking about. Then, in terms of communicating to the rest of management, I think it's really a very similar message: this is a process and not simply a point. Security really is an ongoing effort that requires analysis and investment and attention to do right.
Paller: John Gilligan, the CIO at the Air Force, stood up in front of 200 people and said, "The vendors have to make a quantum change in the way they deliver this software to us, because cleaning up after their messes is costing us more than it costs us to buy the software, and we're willing to pay for it." I think what John discovered, and what most CIOs don't understand, is that when the vendors tell you that a problem is solved, it may mean that somewhere on one of their servers it's solved. But the piece of software you're installing may, in fact, be so full of holes that you're vulnerable within moments after you install it.