In this series on enterprise security, eWeek Labs has so far explored the elements of assessment, prevention, detection and response, each of which requires a variety of tools, products and services. In this concluding segment, we focus on the strategies that can quietly prevent a world of security threats from becoming costly and disruptive breaches.
When driven by top-down commitment from enterprise management, the resulting culture of vigilance will leave fewer vulnerabilities to find, repair and monitor and will drastically reduce the costs that arise during and after successful attacks. Vigilance is the longest-term—but highest-payback—element in an IT security strategy, propelled by consistent attention to the principles explored below.
Risk Assessment
Risks differ greatly in how and where they arise, what they affect, and how they can be reduced. It's therefore important to differentiate the types of risk and the types of consequence likely to occur. That same diversity complicates the challenge of coordinating an enterprise security posture, as the creation of the federal Office of Homeland Security after the attacks of Sept. 11 demonstrated.
“The Computer Security Handbook,” by Arthur Hutt et al., categorizes risks as: physical hazard, equipment malfunction, software malfunction, human error, data misuse and data loss.
Responsibility for these varied risks may be given to physical plant management, IT support staff, application developers, human resources and training staff, and forensic accountants, respectively. Rare is the enterprise in which any single executive, even at the CxO level, has integrated knowledge—let alone expertise—in all these domains, but failure in any neutralizes efforts in all.
Charles Pfleeger, master security architect at Exodus Communications Inc., in Santa Clara, Calif., classifies possible consequences of an IT breach as: interruption (loss of access to an asset), interception (unauthorized access to an asset), modification (alteration of an asset) and fabrication (creation of spurious “assets” such as false transactions).
Hutt and his co-authors suggest a further breakdown among disaster (prolonged consequence), solid failure (requiring temporary cessation of use to repair) and transient failure (temporary and/or irregular in occurrence).
These labels are not academic exercises but rather identify different situations that call for different preparations. For example, depending on the business situation, the threshold of “disaster” may be days or merely hours, and business arrangements—such as network monitoring contracts, backup and restoration response times, quality-of-service agreements, and contingency staffing plans—must all reflect these specifics. Insurance policies should also delineate precisely the kinds of coverage given for various types of damage or disruption.
No Rest
The sad truth is, the task of securing an IT system can never be complete. As Bruce Schneier, chief technology officer of Counterpane Internet Security Inc., warned in his book, “Secrets and Lies,” IT systems have four devastating properties that combine to make vigilance a permanent concern: Enterprise-scale systems are complex, interactive, emergent with unpredictable behaviors and, unfortunately, bug-ridden. eWeek Labs would add to Schneier's list a fifth horseman, so to speak, which is that systems today are actively threatened, compounding the hazards created by the other four characteristics.
But one of the strongest weapons against many IT threats is the growing awareness of security issues among even casual IT users. “When a cab driver asks me what I do and I say Internet security, we can have a meaningful conversation,” said Alex van Someren, CEO at nCipher Corp. Ltd., in Woburn, Mass. “It's gone mainstream.”
At the same time, however, van Someren warns that users' awareness does not translate into comprehension of—or even interest in—technical details. Therefore, the challenge for security service providers, for security product vendors and for enterprise general managers is to translate users' awareness into meaningful behavior change. This can best be done by positive methods, rather than relying solely on penalties for policy violations.
Ed Glover, director of enterprise security and customer engineering for Sun Professional Services, which is also in Santa Clara, offers the example of his own company's measures for promoting physical site security: “We would have security people try to piggyback in through the doors, without their badges, to see if people would try to stop them; if you did, you got a gift certificate for dinner.”
Glover added, “We have fun, but we're all responsible for the assets of this company from both a physical and a logical standpoint. We're constantly being reinforced on what our responsibilities are to protect Sun's assets.”
The combined effects of clear communication and positive reinforcement of good security performance will go much further than draconian threats and security measures that actually impede people's work.
In the long run, Glover said, shared responsibility for security has to be “built into the DNA of the company.”
Vigilance From Day One
When security is added as an afterthought, the finished system retains weaknesses that would never have been introduced had security been a pervasive concern from the start.
For example, a networked application might use certain communication parameters based on the size of application data structures without considering the desirability of encrypting those data streams between widely separated nodes. Adding encryption overheads later might require costly redesign and delay the deployment of a business-critical application.
When choices such as this arise, deployment of insecure systems is the likely result. The point is this: Security orientation should be present at every stage of project development. A life cycle approach, rather than a reactive response, is needed.
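As a concrete, deliberately simplified illustration of the overhead problem, consider the following Python sketch. It assumes the third-party “cryptography” package (our choice for the example, not anything mandated by the scenario) and shows how a record sized to a fixed application structure no longer fits its frame once a nonce and authentication tag are added:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

RECORD_SIZE = 512                         # payload size assumed at design time
payload = os.urandom(RECORD_SIZE)         # stand-in for one application record

key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)                    # 96-bit nonce must travel with the record
ciphertext = AESGCM(key).encrypt(nonce, payload, None)

framed = nonce + ciphertext
print(len(payload), len(framed))          # 512 vs. 540: 12-byte nonce plus 16-byte GCM tag
# A protocol that hard-coded 512-byte frames now needs redesign.

Those 28 extra bytes per record look trivial until they collide with fixed-size buffers, packet-size limits or packing assumptions baked in when the protocol was designed without encryption in mind.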
Adaptive Design
When security is perceived as the job of auxiliary subsystems, wrapping around the IT core, the result can perversely reduce the business contribution of IT by inhibiting system availability. “Complexity has created what are now, overall, more brittle systems, because they werent designed to work together,” said Robert Morris, director of the IBM Almaden Research Center, in San Jose, Calif.
The resulting risks can be astonishingly obvious, after the fact: “For example, your admin has the only password for the key file—and falls under a bus,” said nCipher's van Someren. “The problem is severe unless the admin has been bad and written down the password. Instead, you should be using a system that shares responsibility.”
Algorithms that let any three of five trusted people agree to access a resource, for example, are well-known by the name of “threshold schemes” (as described in Adi Shamir's 1979 paper “How to Share a Secret”). However, these methods require analysis of the business value of the asset and the process implications of authorization sharing before they can be properly used.
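For the curious, here is a minimal Python sketch of Shamir's construction; it is illustrative only, and a real deployment would rely on a vetted library and hardened key handling. A random polynomial of degree 2 hides the secret in its constant term, each trustee holds one point on the curve, and any three points recover the secret by Lagrange interpolation:

import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demonstration

def make_shares(secret, threshold=3, n=5):
    # Random polynomial of degree threshold-1 with the secret as its
    # constant term; each share is one point (x, f(x)) on the curve.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 reconstructs the constant term.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456789)
assert recover(shares[:3]) == 123456789   # any three shares suffice
assert recover(shares[2:]) == 123456789   # ...no matter which three

Fewer than three shares reveal nothing about the secret, which is exactly the property that protects against both the lone admin under the bus and the lone admin gone rogue.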
Direct security issues aside, said IBM's Morris, “operator errors are a significant proportion of all failures, so it's essential to make the system communicate with the user and make it easy for the human to impose priorities on the system.”
When a security breach is handled by isolating a system from the network, as recommended in our previous segment on response, the business needs to go on. If fault-tolerance actions, such as delegation of critical tasks to other servers, are easy to identify and command, operators will be less tempted to keep a corrupted system online while they try to fix the problem on the fly.
System management tools, therefore, can be important points of enterprise security leverage if they make it easier for operators to understand their options and to choose the actions that least disrupt operations.
“When a set of alerts begins to occur,” Morris said, “it doesn't help if I start getting messages saying that processor 33 in branch office 7 is having errors. I need a message saying what application is in trouble: check printing may be delayed. I need to be able to say, ‘A purchase transaction should always take priority over a routine report.’”
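A toy Python sketch of what Morris is asking for might look like the following; the component names, application names and dependency map are all hypothetical. Low-level component alerts are translated into application-level impact and reported in order of business priority:

# Business priorities: lower number means more important.
PRIORITY = {"purchase_transactions": 0, "check_printing": 1, "routine_reports": 2}

# Hypothetical dependency map: which business application relies on
# which infrastructure component. In a real system this would come
# from configuration management, not a hard-coded table.
DEPENDS_ON = {
    "processor-33/branch-office-7": ["check_printing"],
    "db-cluster-2": ["purchase_transactions", "routine_reports"],
}

def triage(component_alerts):
    # Translate raw component alerts into application impact,
    # highest business priority first.
    impacted = {app for c in component_alerts for app in DEPENDS_ON.get(c, [])}
    for app in sorted(impacted, key=PRIORITY.get):
        print("Application at risk:", app)

triage(["processor-33/branch-office-7"])   # -> Application at risk: check_printing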
Error-Prone Environments
System administrators can achieve deniability for incidents by installing layer after layer of protection, but they're not really doing their jobs if the result is an error-prone environment.
Indeed, it's not the job of IT administrators to deploy every available security tool; it's their job to assess the balance between degree of protection on the one hand and likelihood of consistent and correct use of systems on the other.
“Isn't that the important message about security?” asked nCipher's van Someren. “Practical rollout of appropriate security is what the world really needs—not better/faster/stronger algorithms but better ways of ensuring that what we have is made more usable.”
Technology Editor Peter Coffee can be reached at peter_coffee@ziffdavis.com.