In Part 2 of eWeek Labs' Special Series on enterprise security, we examine prevention—how to structure IT systems to minimize the risk of security compromises.
Simply put, prevention is the bedrock of security. However, prevention and continuing vigilance (which we'll cover in Part 5 of this series) are also the two most time-consuming tasks in deploying secure systems.
There's no way around this hard fact: The fundamentals of building secure systems boil down to an intimate knowledge of how deployed software and hardware work, combined with meticulous attention to how systems and groups of systems might fail given the right attack.
Good security is ultimately not a product because technical details are always changing; rather, it's a process and a mind-set. Thus, an organization's top security assets are the right people on duty.
Minimize System Risk
The first major principle of preventing intrusions is to minimize risk by making it harder to crack into existing systems.
The first step in doing this is to shrink the problem domain—cutting down on the number of systems that need to be secured. Otherwise, it's just too big a problem.
A security assessment is a key part of this process. Properly done, a security assessment should tell you what and where your organization's most valuable data is and where the organization's most likely points of attack are. (See last week's installment for advice on assessing security vulnerabilities, at www.eweek.com/links.)
With assessment results in hand, you must install all available updates—but only after all needed components are installed, so update agents will download the right patches. This is tricky to do safely because systems are highly vulnerable when freshly installed, a point illustrated only too well in our testing when Microsoft Corp.'s IIS (Internet Information Services) was infected multiple times during the period required to download Windows 2000 Service Pack 2.
The second step in minimizing risk is to start trimming fat from the systems that matter.
Cut deep, leaving only enough functionality for critical systems to work and not a bit more. Are there extra libraries installed that enable additional functions? Remove them. Are sample files or code installed? Ditto.
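Knowing what to cut starts with an inventory of what's actually running. The Python sketch below lists every process listening on a network port; each entry is a candidate to justify or remove. It assumes the third-party psutil package is installed and may need root privileges to resolve every process.

```python
# A minimal footprint-audit sketch: enumerate processes listening on
# network ports so each one can be justified or removed. Assumes the
# third-party psutil package; may need root to see every PID.
import psutil

def listening_services():
    seen = set()
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_LISTEN or conn.pid is None:
            continue
        try:
            name = psutil.Process(conn.pid).name()
        except psutil.NoSuchProcess:
            continue  # process exited between enumeration and lookup
        entry = (name, conn.laddr.port)
        if entry not in seen:
            seen.add(entry)
            print(f"{name} (pid {conn.pid}) listens on port {conn.laddr.port}")

listening_services()
```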
Adherence to this minimal footprint is a big part of what makes systems like the OpenBSD development team's OpenBSD and the Apache Software Foundation Inc.'s Apache HTTP server so secure out of the (electronic) box.
Even something as innocent as documentation can be an unexpected avenue for attack. We discovered that firsthand in our second Openhack security test when an attacker discovered a new vulnerability in Sun Microsystems Inc.'s Solaris AnswerBook2. (See story at www.eweek.com/links.)
This is where deep system expertise about which components are actually essential—combined with the patience to test and retest ever-smaller configurations—is invaluable. Top staffers need to be assigned to this job.
The third step is to change system defaults. Attackers infer knowledge about the systems they target from their own copies of the same software.
The most obvious element of this task is changing all passwords for known user names, especially administrative users. Other dangerous defaults include default file paths, default permissions on system files (which are frequently too trusting), the account under which software runs (which may unnecessarily have administrative permissions) and default application settings.
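To make such checks concrete, here is a minimal Python sketch of one narrow audit: flagging world-writable files under directories that should be tightly controlled. The directory list is illustrative, not a complete hardening policy.

```python
# A minimal default-permissions audit: flag world-writable regular files
# under directories that should be tightly controlled. The directory
# list below is illustrative only.
import os
import stat

SYSTEM_DIRS = ["/etc", "/usr/local/bin"]  # illustrative targets

def world_writable(dirs):
    for top in dirs:
        for root, _, files in os.walk(top):
            for name in files:
                path = os.path.join(root, name)
                try:
                    mode = os.lstat(path).st_mode
                except OSError:
                    continue  # unreadable or vanished file
                if stat.S_ISREG(mode) and mode & stat.S_IWOTH:
                    yield path

for path in world_writable(SYSTEM_DIRS):
    print("world-writable:", path)
```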
An eWeek reader, an IIS administrator, recently provided an example of the fruits of default-changing labor: The reader's organization was infected by the sadmind/IIS worm, but its Web pages were not defaced because he had put Web site files into nondefault locations.
The fourth step to hardening systems: Install server- or client-side tools that actively work to block anomalous behavior, on the principle that it might be harmful. Anti-virus software, local network firewalls, application firewalls and trusted operating systems such as Argus Systems Group Inc.'s PitBull or Sun's Trusted Solaris all apply this principle. (See eWeek Labs reviews of several IIS application firewalls online at www.eweek.com/links.)
The fifth step in the hardening process is to do final predeployment penetration tests (at least a port scan and a vulnerability scan) and functionality tests to double-check that nothing obvious was missed and that the system can do the job it needs to do.
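As a rough illustration of the port-scan portion of that check, the following Python sketch tries to connect to each port on a target host and reports those that answer. A production test should use a dedicated scanner such as nmap; the address shown is a documentation placeholder.

```python
# A minimal predeployment port-scan sketch: attempt a TCP connection to
# each port and report the ones that accept. Illustrative only; use a
# dedicated scanner (e.g., nmap) for real testing.
import socket

def scan(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

print(scan("192.0.2.10", range(1, 1025)))  # placeholder test address
```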
Although not part of prevention, it's also important to change settings that will later help with detection and security response by enabling system logging and auditing features. IT administrators should also implement egress filtering to limit the relay of DoS (denial-of-service) attacks. Egress filtering ensures that all outgoing IP traffic has a reasonably correct source IP address.
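The decision rule behind egress filtering is simple enough to sketch in a few lines of Python, assuming a placeholder netblock. In practice the rule is enforced at a router or firewall, not in application code; this only illustrates the logic.

```python
# A minimal sketch of the egress-filtering rule: an outbound packet should
# be dropped unless its source address falls inside netblocks the
# organization actually owns. The netblock below is a placeholder.
import ipaddress

OWNED_NETS = [ipaddress.ip_network("203.0.113.0/24")]  # placeholder netblock

def valid_egress_source(src_ip):
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in OWNED_NETS)

assert valid_egress_source("203.0.113.42")      # legitimate traffic passes
assert not valid_egress_source("198.51.100.7")  # spoofed source is dropped
```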
If all these steps are performed correctly, you have a hardened system that will defeat all anticipated attacks and, hopefully, most unanticipated ones.
Document all changes. Servers or clients can now be imaged for distribution.
Plan for Failure
The next principle in preventing intrusions is to design for failure. The hard truth is that all systems are vulnerable, whether through highly skilled outside attackers, an accidental misconfiguration, a momentary lapse of attention or an internal attack.
And when planning for failure, the level of protection should match the value of the assets being protected.
Therefore, the most sensitive or most irreplaceable data should logically be as “far” away from weak points in an IT infrastructure as possible. Servers that can be accessed by many people or that run many services are considered weak points, no matter how well these servers have been secured.
Reliable security systems use layered barriers that are independent of one another, so when the first fails, the second or the third will stay standing. This approach introduces system complexity and increases costs, so it needs to be applied judiciously.
For example, internal client systems are usually not that valuable and so should be protected only from the most common form of client attack: e-mail viruses. End-user education about safe e-mail practices, combined with server-side e-mail filtering or client-side anti-virus software, is adequate protection for these systems.
Laying down protection for a server containing something as sensitive as customer credit card data is a very different story, of course. To secure such a server, we recommend the following practical steps, which can be applied to any server hosting sensitive information:
• A network firewall performs a first layer of protection, filtering network traffic down to a small set of protocols (limiting external network attacks to services running on these ports).
• A local server firewall duplicates and tightens these protections down to HTTP and HTTPS (HTTP using Secure Sockets Layer) protocols only, plus needed database and management protocols (limiting internal network attacks).
• Next, harden the server so the only server programs running are the Web server and application server (if the two are on the same server). This again will limit network attacks in case of an error in firewall configuration.
• Make sure the user names under which server applications run have no access permissions to the database tables storing credit card data, only to a stored procedure that supplies the last four digits of a card number for a given user name (see the sketch following this list). This protects against application attacks.
• Finally, encrypt the credit card data itself using database encryption, which will protect against file system attacks.
This is a complicated design, but it safeguards key data against a wide variety of remote and local attacks and can tolerate multiple security failures with little to no system compromise.
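As one illustration of the stored-procedure barrier, here is a minimal Python sketch. It assumes a PostgreSQL back end reached through the psycopg2 driver and a hypothetical get_card_last4 function; the application's database account is granted EXECUTE on the function but no SELECT on the card table itself.

```python
# A minimal sketch of the stored-procedure barrier, assuming a PostgreSQL
# back end via psycopg2 and a hypothetical get_card_last4 function. The
# application account has EXECUTE on the function but no SELECT on the
# underlying card table, so even SQL injection through this account
# cannot read full card numbers.
import psycopg2

def card_last4(conn, user_name):
    with conn.cursor() as cur:
        cur.callproc("get_card_last4", (user_name,))  # hypothetical function
        row = cur.fetchone()
        return row[0] if row else None

conn = psycopg2.connect("dbname=shop user=webapp")  # placeholder credentials
print(card_last4(conn, "jdoe"))
```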
Secure Design Practices
The third major principle in preventing intrusions is to implement secure programming practices. Any application that accepts user input is a potential security risk, and externally facing dynamic Web applications are especially high-risk deployments.
Custom applications must be paranoid about checking user-supplied data for safety: They should be programmed to check for unsafe input such as quotes, SQL strings or browser scripting commands, possible null strings, and extra passed URL parameters. They should also check for poor handling of bad input such as buffer overflows or uncaught exceptions.
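The following Python sketch shows the pattern in miniature: validate input against an explicit whitelist with a length cap, and pass values to the database through parameterized queries rather than string concatenation. The table and pattern are illustrative.

```python
# A minimal paranoid-input sketch: whitelist-validate user input, cap its
# length and use parameterized queries so the driver quotes values,
# defeating SQL injection. Table and pattern are illustrative.
import re
import sqlite3

USER_NAME_RE = re.compile(r"^[A-Za-z0-9_.-]{1,32}$")  # whitelist with length cap

def lookup_user(conn, user_name):
    if not USER_NAME_RE.fullmatch(user_name):
        raise ValueError("rejected unsafe user name")
    cur = conn.execute("SELECT id, email FROM users WHERE name = ?", (user_name,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'jdoe', 'jdoe@example.com')")
print(lookup_user(conn, "jdoe"))
# lookup_user(conn, "x'; DROP TABLE users; --")  # raises ValueError
```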
Make Contingency Plans
Part of prevention is ensuring minimal operational disruption should a successful break-in occur. Regular backups allow individual destroyed or corrupted files to be restored, provide a way to track changes made to key system files, and are a quick way to roll systems back to “good” configurations.
We also recommend using system file integrity checkers such as Tripwire Inc.'s Tripwire so you'll know what changes crackers made should a system be penetrated.
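For those who want to understand the technique, here is a minimal, Tripwire-like sketch in Python: record SHA-256 hashes of key files as a baseline, then report any file whose hash later changes. In real use, the baseline must live on read-only or offline media so an intruder cannot rewrite it along with the files; the watched-file list is illustrative.

```python
# A minimal, Tripwire-like integrity-check sketch: hash key files into a
# baseline, then flag files whose hashes change. Store the baseline on
# read-only or offline media in real deployments.
import hashlib
import json

WATCHED = ["/etc/passwd", "/etc/hosts"]  # illustrative file list

def snapshot(paths):
    hashes = {}
    for path in paths:
        with open(path, "rb") as f:
            hashes[path] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def save_baseline(paths, baseline_file="baseline.json"):
    with open(baseline_file, "w") as f:
        json.dump(snapshot(paths), f)

def check(paths, baseline_file="baseline.json"):
    with open(baseline_file) as f:
        baseline = json.load(f)
    for path, digest in snapshot(paths).items():
        if baseline.get(path) != digest:
            print("CHANGED:", path)

save_baseline(WATCHED)
check(WATCHED)  # reports nothing unless a watched file has changed
```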
Business Process Changes
The final steps in a prevention strategy are to teach and mandate business process changes that promote safe computing.
It should be made perfectly clear to IT staff and end users what is acceptable behavior, who has access to secured resources and how to respond to a possible compromise.
Remember, security is about much more than software: The tightest software configuration can't protect a server if a determined attacker can don a maintenance uniform, walk into the server room and sit down at a logged-in console.
West Coast Technical Director Timothy Dyck can be reached at timothy_dyck@ziffdavis.com.