Security: Next Steps
If you ask most people to free-associate from the trigger term "September 2001," likely responses might be "World Trade Center" or "terrorists." Only people at the epicenter of an enterprise IT operation are likely to recall, without being reminded, that the week after 9/11 was marked by the worldwide attack of the Nimda worm, which many now regard as an inflection point in the sophistication and, consequently, the speed and severity of attacks against e-business.
The University of Calgary, in Alberta, Canada, has since compiled estimates of Nimda's impact that include 2.2 million infected machines within 24 hours and a cleanup cost of $539 million.
That's more than the individual gross domestic products of 15 of the member countries of the International Monetary Fund, not to mention being enough to take every worker in the United States out to Starbucks.
The IT industry has had five years to recognize the significance of such numbers and to make the best practices of enterprise security the norm rather than the exception. But that recognition has remained largely nominal, and the response superficial.
Two years after Nimda, for example, the Slammer worm successfully inflicted a billion dollars' worth of nuisance and cleanup. Slammer doubled its number of victims every 8.5 minutes, affecting 90 percent of vulnerable targets worldwide within its first 10 minutes in the wild.
Even two years later, the Sober worm in 2005 may have accounted at times for as much as 70 percent of worldwide e-mail volume, succeeding by taking advantage of laxity in risk assessment and prevention; underinvestment in detection and response; and, all in all, a general lack of vigilance.
By no coincidence, those five elements of security (risk assessment, problem prevention, attack detection, incident response and creation of a climate of vigilance) were the five sections of a major eWEEK Labs series of articles, titled "Five steps to enterprise security," that was launched in November 2001. Taking no pleasure whatsoever in the continuing relevance of recommendations made five years ago, Labs staff revisit that report in the following pages, with the aim of reiterating what's still critical and also raising consciousness in areas of concern that have emerged or intensified since then.
We hope this update finds a climate of improved awareness and expanded resources for addressing security issues, so that the end of 2011 will find us less tempted to issue a 10th anniversary update to this manifesto for enterprise infrastructure protection.
Security Step 1: Assessment
By Peter Coffee
In the year that followed the initial publication of eWEEK Labs' "Five steps to enterprise security" in November 2001, there was a firestorm of reaction to the accounting abuses revealed in the aftermath of the Enron bankruptcy in December of that same year.
In September 2006, on the fifth anniversary of the 9/11 terrorist attacks, members of eWEEK's Corporate Partner Advisory Board told eWEEK that the impact of those attacks had been far more evident in new physical security measures than in any elevation of the IT security posture. What had most changed their IT environment, said many of the Corporate Partners, was the post-Enron impact of sweeping and pervasive legislation such as the Sarbanes-Oxley Act and of other enterprise governance mandates, along with public awareness of privacy threats and risks of identity theft.
Notably, California's Security Breach Information Act, aka SB 1386, applies to any company with even a single California customer. It mandates broad notification of any apparent leakage of unencrypted "personal information," defined with admirable specificity as an individual's first name or first initial and last name in combination with any one or more of the following: a Social Security number, driver's license number or ID number, or financial account number in combination with an associated security code or password.
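The statute's definition is concrete enough to express in code. The sketch below, using hypothetical field names of our own choosing (not taken from the statute), shows how an audit script might flag records that meet SB 1386's test:

```python
# Rough sketch of SB 1386's definition of "personal information":
# a first name (or first initial) plus last name, combined with at
# least one sensitive identifier. Field names here are illustrative.

def is_sb1386_personal_info(record: dict) -> bool:
    has_name = bool(record.get("last_name")) and (
        bool(record.get("first_name")) or bool(record.get("first_initial"))
    )
    # A financial account number counts only when paired with an
    # associated security code or password.
    financial = bool(record.get("account_number")) and (
        bool(record.get("security_code")) or bool(record.get("password"))
    )
    sensitive = (
        bool(record.get("ssn"))
        or bool(record.get("drivers_license"))
        or bool(record.get("state_id"))
        or financial
    )
    return has_name and sensitive
```

Under the act, any apparent leakage of unencrypted records matching such a test would trigger the notification requirement.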
Since taking effect in July 2003, SB 1386 has forced costly and embarrassing public admissions from state agencies, financial institutions, retailers, educational institutions and other employers.
The overall effect of SarbOx, SB 1386 and similar measures in other states is that inattention to enterprise security now has a clearly marked price tag for senior managers. If IT security is lax, they'll spend a lot of money admitting it. If they don't come clean about what they do and how, they could wind up facing fines or even jail time. Either way, their stock options will take a beating as the market punishes a tarnished public image with share price markdowns.
Infrastructure risk assessment is no longer an unpopular exercise in looking for leaks in the roof when the sun is shining. It's now understood as an essential element of due diligence, a piece of the practice of being a going concern that wants to stay that way. The resources made available, and the respect and consideration for the people doing the work, appear to have markedly changed for the better in the past five years.
So easy to do it so wrong
That said, there are uncountable ways to enact the form of risk assessment and abatement while utterly failing to deliver the substance. Pre-9/11 and pre-Enron, the stakeholders for IT security were primarily internal: operators who needed to ensure facility uptime and in-house users who needed access to applications and confidence in the quality and protection of data. The environment now demands far more attention to external stakeholders and scrutineers.
This change has taken place rather quickly, even by the standards of the fast-paced IT sector: Technology-centered professionals still are all too likely to be thinking in terms of IT asset protection, ensuring that servers are not taken down and applications are not taken out of service, while failing to appreciate the data-centered and business process-centered viewpoints that the world's top spooks now urge upon security practitioners as the best vantage points for assessment.
In the post-9/11 world, the name of the National Security Agency can actually be mentioned in a mainstream conversation without triggering nervous jokes about "No Such Agency" (its former nickname) or the "Maryland Procurement Office" (the NSA's innocuous nom de guerre for purchasing its plethora of high-tech tools). Indeed, the NSA has come into the public spotlight as a center of excellence for security technique: Frameworks such as IAM-whose dueling spellouts are the NSA Information Assurance Methodology and Infosec Assessment Methodology-are widely published and taught.
IAM includes a notion of "impact attributes" that enterprise professionals will do well to embrace and understand. Core impact attributes often are listed as confidentiality, integrity and availability, all of which are essential to anything deserving the name of a secure infrastructure. In fact, these attributes represent a three-legged stool that will fall if any of those legs is unsound.
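The three-legged stool can be captured in a few lines of code. In this sketch, the numeric scale and the min() rule are our own illustrative device, not part of the NSA IAM:

```python
from dataclasses import dataclass

# Illustrative model of the core impact attributes. The 0-3 scoring
# scale and the min() rule are our own shorthand for the stool
# metaphor, not anything prescribed by the IAM itself.

@dataclass
class ImpactRating:
    confidentiality: int  # 0 (unprotected) .. 3 (fully assured)
    integrity: int
    availability: int

    def overall(self) -> int:
        # The stool falls if any leg is unsound: overall assurance
        # can be no better than the weakest attribute.
        return min(self.confidentiality, self.integrity, self.availability)

payroll = ImpactRating(confidentiality=3, integrity=3, availability=1)
print(payroll.overall())  # 1: strong secrecy doesn't offset poor uptime
```

The point of the min() rule is that excellence on two attributes cannot compensate for weakness on the third.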
Brace for impact
Specific organizations and missions may have other infosec attributes that must be achieved and preserved. Security Horizon has produced an NSA IAM guide (released by Syngress Publishing in 2004 as a book titled "Security Assessment") that suggests a number of additional, possibly relevant attributes beyond the core three.
Rather than offering a predefined list, however, the authors of that guide are making the more general point that stakeholders should be asking questions about the types of information that are important to an organization; they should annotate the resulting list with the characteristics that define their own mission-specific, type-specific standards of what it means for that organization's information needs to be securely met.
Any idiot can conjure up a threat level or risk index or other gross measure of a security situation. Absent detailed discussion, not at the level of IT assets but at the level of key processes and the impact attributes that go with them, gross measures are worse than useless-they convey a false sense of knowing what's going on.
Enterprise managers need something better, and infrastructure professionals have never had better leverage to pry loose the resources needed to provide it.
Best practices: Assessment
Security Step 2: Prevention
In 2001, eWEEK Labs focused security prevention advice on the need to harden outward-facing systems, particularly against external attacks. Six years later, protecting against external threats is still a priority, but administrators now must also look inward to reduce the risks posed by a company's own users.
Indeed, the threat landscape has changed significantly since 2001. Largely gone are the high-profile worms crafted to make plenty of noise and cause outages. They've been replaced by increasingly sophisticated, stealthier and more targeted attacks designed to steal data for financial gain. Attackers have found deceiving users to be a highly effective way to establish a foothold on a network, either for use as a way station for further attacks or for outright data theft.
Many organizations have likewise stepped up their user security training, but it's still all too easy for a user to make a bad choice-whether it is opening an innocuous-looking attachment, installing some innocent-looking but nefarious software widget or even simply clicking on the wrong link.
The real key to security prevention, therefore, is minimizing the amount of damage that unwitting users can do if and when they inevitably make a bad decision-specifically, by denying the user administrative control over the local system.
You can see the value of this strategy by looking closely at Microsoft's December 2006 patch smorgasbord. The vulnerability details for the three critical updates (one each for Internet Explorer, Visual Studio 2005 and Windows Media Player) state the following: "An attacker who successfully exploited this vulnerability could gain the same user rights as the local user. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights." This message is very common with Microsoft vulnerabilities.
While there are certainly tactics and exploits out there that an attacker could use to escalate privilege on a compromised host, limiting a user's rights at least would make the process more difficult. Indeed, companies should now be considered negligent for unnecessarily allowing users local admin rights.
Microsoft has clearly recognized this. Windows Vista's UAC (User Account Control) feature strives to limit user privilege out of the box, requiring user approval or additional authentication to perform tasks deemed potentially risky to the system. This kind of system lockdown is also possible with Windows XP- and Windows 2000-based clients.
Of course, the most difficult part of adopting the LUA (Least-Privileged User Account) philosophy is getting poorly written applications to work properly with limited rights, but tools such as Microsoft's Standard User Analyzer help identify where applications will run afoul of LUA. Other tools are available to then help administrators adjust credentials in a targeted fashion, increasing privilege levels only when and where absolutely necessary. BeyondTrust's Privilege Manager is just one such solution.
Of course, an attacker with user privileges is still a problem, as an established beachhead can be used to launch additional attacks. Companies must therefore continue to maintain-and even streamline-an effective patch strategy that encompasses not only the operating system but also third-party applications and drivers.
Patching is now a reactive process, however, as the relationship between vulnerability, exploit and patch has been irrevocably altered within the last year. Administrators may have grown accustomed to the following standard "procedure": Microsoft releases a patch, thereby documenting a vulnerability; exploit code is then reverse-engineered from the patch and released into the wild, leaving a scant few days to test and deploy the patch.
But with the increasing amount of research into the security of Microsoft products-research that is performed by people wearing both white and black hats-vulnerability details and exploit code often are found in the wild well before an actual patch is released. This effectively eliminates the patch window, so administrators must work hard to streamline patch testing and deployment processes.
It is also now incumbent upon administrators to track and monitor lists of known, unpatched vulnerabilities (such as those at research.eeye.com/html/alerts/zeroday/index.html). Admins must evaluate the potential impact of these vulnerabilities to the network and weigh the costs and benefits of deploying temporary workarounds.
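That weighing of costs and benefits can be made explicit, if only crudely. The scoring scale and threshold below are illustrative assumptions of ours, not a standard industry formula:

```python
# Hedged sketch: decide whether an interim workaround for a known,
# unpatched vulnerability is worth deploying before a patch ships.
# All weights and thresholds here are illustrative assumptions.

def deploy_workaround(severity, exposed_hosts, workaround_hours,
                      risk_threshold=100):
    """severity: 1-10 judgment call; exposed_hosts: systems reachable
    by the exploit; workaround_hours: rough admin effort to deploy."""
    risk = severity * exposed_hosts
    cost = workaround_hours * 10  # weight effort against risk units
    # Deploy when risk outweighs cost by the chosen margin.
    return risk - cost > risk_threshold

# A critical remotely exploitable flaw on 50 Internet-facing hosts
# justifies a day of reconfiguration work; a minor flaw on 5 does not.
print(deploy_workaround(9, 50, 8))   # True
print(deploy_workaround(3, 5, 8))    # False
```

The real value of writing the decision down, even this roughly, is that the assumptions behind it become visible and debatable.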
Loss of mobile data also remains a major concern, as we've seen from the numerous accounts of lost and stolen laptops that contained thousands or millions of personal records. The use of encryption products (for example, Check Point Software Technologies' Pointsec or Utimaco Safeware's SafeGuard portfolio) remains an obvious way to deal with this threat, securing files, folders or entire volumes from prying eyes when a device falls into the wrong hands.
But encryption is a workaround that ignores larger, systemic questions: Do the potential productivity gains from granting employees anywhere, anytime access to sensitive data outweigh the risks of this access? And, if so, is there a better way to manage the outflow of this information?
We feel a better alternative would be to invest in data access through tightly secured Web services, with control over who can access what from where. Network connectivity still has a way to go before it is ubiquitous enough to make this scenario a reality, but it should be a goal to have only a few core entry points to critical data.
Best practices: Prevention
Security Step 3: Detection
The recent revelation that a database at the University of California, Los Angeles, had been hacked was bad enough. But the fact that 800,000-plus identities were made vulnerable over a period of 13 months shows that all the detection advances in the world won't work if they aren't implemented.
Technicians at UCLA noticed the unusual traffic patterns that revealed the breach-which made the names, addresses, Social Security numbers and birth dates of UCLA students and staff vulnerable-on Nov. 21, more than a year after the hack had occurred. What's maddening-and mystifying-is that a whole class of network anomaly detection and data leak prevention tools, not to mention a new generation of network and host intrusion detection and prevention tools, were available during the time the UCLA data was being mined for who-knows-what purpose.
In the Detection portion of our 2001 "Five steps to enterprise security" series, we said that detecting network attacks was as much an art as a science. This is still true, but the science has greatly improved, driven in part by regulations that have forced some organizations to pay attention to how personal data is stored, used and transmitted.
However, spectacular data breaches, such as the one at UCLA, still leave the impression that personal data isn't valuable enough to warrant more than casual oversight at some organizations. And while regulation may seem an ineffective method of protecting private information, self-imposed rules appear to have an even more dismal track record.
This ugly truth was revealed most recently by the Department of Veterans Affairs. In May 2006, the VA enabled the monumental theft of 26 million personal records from a laptop. According to assurances from the FBI, the theft of the laptop from a private home was aimed at the hardware, not the information it contained. Nonetheless, detection tools available at the time of the loss could have alerted managers to the concentration of this valuable data on a mobile device. In fact, systems available at the time could have prevented the unauthorized movement of this data onto the laptop.
In some cases, technology cant stop the malicious use of personal data. This was exemplified by the loss of 145,000 personal records that ChoicePoint unwittingly sold to impostors in 2005. In that case, weak screening methods allowed the impostors to pose as ChoicePoint customers to get their hands on the goods. Even so, leak control software likely would have added a layer of authentication checking to the business process.
In fact, since our 2001 report, a whole class of data leak prevention tools-which detect out-of-policy data use to flag or prevent improper data movement-has emerged. Tools from Vontu, GTB Technologies, Verdasys, Reconnex, Tablus and Vericept-to name just a few vendors-all work on the basic principle of detecting and blocking unpermitted data use, even when the user is an employee. Most of these vendors' products are built for regulated organizations-usually in the financial services or health care industries.
UCLA, which obviously stores a huge amount of sensitive and personal data, likely wasn't using a leak prevention tool, to the detriment of its students, faculty and staff. That's too bad. But while data losses such as those reported at UCLA, the VA and ChoicePoint make headlines, it's safe to say detection systems have prevented far greater losses.
And this points to a huge challenge for IT managers responsible for security-how to accurately state what malicious activity has been prevented through prudent action.
And this is where the art-or perhaps politics-of detection comes into play. To demonstrate that detection works, reports must be created and used in such a way that non-IT managers can use them as well.
In some cases, trial use of programs that control user access-such as single sign-on tools-can graphically show failed attempts to access data. Reports generated by trial implementations of leak prevention tools are even better at showing what nefarious activity has been averted.
On this front, our recommendation has changed from the one we made in 2001, when we focused on vulnerability assessment and penetration testing. Based on our experience since then, we now recommend that IT managers put authorized use at the center stage of the security architecture and use risk assessment to determine how to protect valuable data and systems. IT managers must know what is authorized and acceptable use of data and systems and detect all activity that falls outside these bounds.
In fact, nearly all of the security products eWEEK Labs tests-especially those that make the breathless claim of being able to stop zero-day attacks with no prior knowledge of the exploited vulnerability-operate by setting a baseline of known good behavior.
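The baseline principle is simple to sketch. The example below learns the normal range of a single metric and flags large deviations; the choice of metric and the 3-sigma cutoff are our own illustrative conventions, not any vendor's actual algorithm:

```python
import statistics

# Minimal sketch of anomaly detection against a baseline of known
# good behavior. Real products profile many metrics at once; here
# we track just one, with an assumed 3-sigma cutoff.

def make_detector(baseline, sigmas=3.0):
    """baseline: metric samples (say, outbound connections per
    minute) collected during a known-quiet period."""
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline)
    def is_anomalous(value):
        return abs(value - mean) > sigmas * stdev
    return is_anomalous

normal_traffic = [40, 42, 38, 41, 39, 43, 40, 41]
check = make_detector(normal_traffic)
print(check(41))    # False: within the learned range
print(check(400))   # True: a spike worth investigating
```

The weakness of any such scheme, as the products themselves demonstrate, is that the baseline must truly represent good behavior; a baseline learned while an intruder is already active will quietly whitelist the intrusion.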
However, detecting out-of-bounds data and system use isn't enough to keep security systems in the good graces of upper management. Inflexible systems that can't accommodate, say, traffic spikes at the end of the month or the easy addition of new application traffic are barriers to productivity. To this end, we see products that allow IT technicians associated with business units to create policies as being an essential part of implementing detection tools.
Our tests of leak detection tools, in particular, have focused on the ability to let authorized users make changes to monitoring functions. IT managers evaluating these types of tools should put this functionality atop their list of essential features.
Indeed, when it comes to detection, there is no substitute for an intimate knowledge of what traffic should be traveling the organization's network and what data and transactions are needed to carry on business.
Although vendors of detection tools often emphasize the simplicity of installation and integration of their products into the network, the simple fact is that unless a human being evaluates the traffic and usage patterns revealed by these tools, malicious activity can go undetected.
Finally, change management procedures can go a long way toward reducing the false-positive readings often associated with detection tools.
Best practices: Detection
Security Step 4: Response
By Jim Rapoza
In 2001, Nimda and Code Red were the evil forces to be reckoned with. Today? They seem almost quaint in the face of malware such as rootkits.
What did Code Red do that made it so horrifying to IT administrators? Basically, it defaced Web site home pages. Even Nimda, which was pretty destructive for its time, pales in comparison to rootkits, the main danger that security and IT administrators face today.
Indeed, if recovering from Nimda and Code Red was like cleaning up after some rowdy neighborhood kids had egged your house, then finding out that your business has been successfully compromised by a rootkit is like finding out that your identity has been stolen and that the thief has bugged your house and your phone lines and has had full run of your house when you weren't home.
So how should IT managers respond when they find that a rootkit has turned company systems into its own personal playground? Unfortunately, the best advice often is that which was given to the company that was the subject of eWEEK Labs' "Anatomy of a rootkit hack": Nuke it from space. In other words, take down the system on which the rootkit has been implanted and rebuild it from scratch. (The company eWEEK Labs profiled chose, not surprisingly, to remain anonymous.)
But while it is possible to take down and rebuild a single system or server that has been infected with a rootkit, this usually isn't an option when the rootkit has had access to a number of vital company servers, systems and resources. Just as being infected with a rootkit is like having your identity stolen, the response is also similar in many ways: Everything that touched the infected system in any way, shape or form has to be considered suspect. And businesses will need to watch carefully for months, if not years, to make sure that there are no hidden or remaining effects from the rootkit invasion.
When to pull the plug
With most standard system infections, the first step once a problem has been detected is to pull the plug-literally. However, while this works fine when one system is involved, how do you pull the plug on an entire network? If the network in question is an internal corporate segment, then you should pull the plug on the entire segment. While this will cause a user outcry, it is vital to disconnect the affected systems from the Internet.
When it comes to resources that cant be shut down-such as network segments including externally facing Web, database and application servers-it may be necessary to do what the company in the "Anatomy of a rootkit hack" article did: Intentionally poison your own DNS (Domain Name System) tables. This will mislead rootkit controllers about the location of affected systems.
Once all potentially infected systems are isolated, you will need to find and remove the rootkit itself. Standard applications, such as anti-virus tools, will help here. However, you also should use specific rootkit detection programs, such as Microsoft's Windows Sysinternals RootkitRevealer, that use cross-detection techniques to find rootkit-caused changes in a system.
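The cross-detection idea is worth making concrete. Tools such as RootkitRevealer compare what the high-level Windows API reports against what a raw scan of disk and registry structures shows; the sketch below simulates that comparison with two plain listings:

```python
# Sketch of the cross-view technique behind rootkit detectors:
# enumerate the same objects two different ways and treat any
# discrepancy as possible hiding. Real tools compare Windows API
# results against raw on-disk structures; the two listings here
# are simulated stand-ins.

def cross_view_diff(api_view, raw_view):
    # Entries visible to a raw scan but hidden from the API are
    # the classic signature of a rootkit filtering system calls.
    return set(raw_view) - set(api_view)

api_listing = {"notepad.exe", "svchost.exe"}
raw_listing = {"notepad.exe", "svchost.exe", "xyzzy.sys"}
print(cross_view_diff(api_listing, raw_listing))  # {'xyzzy.sys'}
```

A rootkit sophisticated enough to lie consistently to both views defeats this technique, which is one reason the "nuke it from space" advice persists.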
At this point, it is vital to trace any and all activity related to the rootkit infection. Indeed, the response should be "all hands on deck" for the company IT staff. Everything that could have been touched or seen by the rootkit-infected system needs to be checked, and all activity on the infected system needs to be studied, starting from the time of infection.
The IT executive at the company profiled in the "Anatomy of a rootkit hack" did just that and found, to his dismay, that an IT staffer had used a domain administrator name and password on the rootkit- and keylogger-infected system. That security error gave attackers the keys to pretty much everything in that particular enterprise kingdom.
The next step is to tap into your deepest, darkest fears. Imagine the shady underworld characters who now have detailed information on all your most vital passwords and access mechanisms, and what they can do with this info.
Then, change everything: all passwords, user accounts, authentication systems-anything that could have been scanned or accessed by infected systems.
If you've been thinking about upgrading your security, network and server infrastructure, you might as well do it now. It's a lot of work, but if the bad guys have even one password that still works, you could be going through this whole process again before you know it.
The final step is to try to stop a rootkit infection from ever happening again. It's true that some rootkits are so sophisticated that they will evade all your security and anti-virus systems. But, in the majority of cases where a rootkit spreads, someone messed up along the way. Perhaps a user downloaded non-work-related programs to his or her corporate system. Or there were users who didn't follow good security practices and opened unexpected attachments in e-mail.
A rootkit infection, and all the turmoil it causes, is a good opportunity to reiterate (or iterate, if you haven't already) the importance of good security practices. Of course, there also may be a need to educate IT staff.
But, in the end, there's no rest for the security weary. Like people who have been subject to identity theft, victims of rootkit infections can never be 100 percent sure that they got everything-that there isn't a little Trojan or another rootkit quietly hiding somewhere, waiting to strike again when the IT staff's guard is down.
The only effective response is continuous vigilance.
Best practices: Response
Security Step 5: Vigilance
By Peter Coffee
During the past five years, the standard of what constitutes due care for maintaining an enterprise security posture has risen almost beyond recognition. It can be difficult to find good estimates of the associated costs, since organizations are understandably loath to discuss in detail their security efforts or their spending thereon. What seems likely, though, is that many of the widely reported costs that are laid at the door of Sarbanes-Oxley Act compliance would arguably have been incurred much earlier if a disciplined security framework had been constructed before it became a SarbOx compliance prerequisite.
Estimates of SarbOx compliance costs may therefore serve as something of a proxy for more general security costs, and the levels and trends of those compliance costs are staggering. A survey of corporate board members conducted in 2004 by RHR International and The Directorship Search Group found an estimated average annual cost of $16 million for compliance with SarbOx-with some companies, such as top-tier insurer AIG, reporting almost 20 times that figure. And these aren't merely startup costs-there are indications of ongoing comparable expense.
In some cases, sad to say, appropriate diligence has metastasized into obsession-as when a passion for preserving the confidentiality of directorial discussions at Hewlett-Packard led to last year's devastating "pretexting" scandal that stripped the company of key managers, officers and directors. Even a well-conceived security strategy can be executed to excess.
That said, most organizations rightly suspect they have yet to reach the level of "good enough," let alone any fears of going too far. A survey conducted last year by ControlPath, a developer of automated compliance management solutions, found only 28 percent of organizations expressing confidence that they were entirely in compliance with regulations affecting their process governance. Moreover, merely meeting legislative or regulatory mandates is not enough to let the well-informed IT professional sleep soundly. More is required.
Cultures of carelessness
Any long-term progress in elevating enterprise security will have to be an achievement of making a culture swim upstream against the currents of evolving technology. Like trends in real-world weaponry that favor the insurgent over the conventional armed force, the IT worlds trends in processing, connectivity and storage pave the way for both intentional and merely careless leakage or abuse.
Only organizational buy-in to the relevance of security awareness and to the appropriateness and necessity of broad participation in the security process can overcome adverse technology trends.
In processing: The "Deep Crack" machine, built by the Electronic Frontier Foundation to demonstrate feasible brute-force attacks on the DES (Data Encryption Standard) algorithm, cost $250,000 when constructed in 1998. A comparably powerful system could probably be built today for well under $10,000, or a parallel algorithm could be devised and executed on Sun Microsystems public grid (www.network.com) for $1 per CPU hour.
In connectivity: The term "war driving," for opportunistic location and disclosure of unsecured wireless network access points, was coined in 2001. Today, a handheld detector less than 3 inches square-selling for less than $60-can show on its LCD readout the SSID (service set identifier), signal strength, encryption status and channel assignment of any Wi-Fi access point within range. Maverick unsecured departmental Wi-Fi setups have never been easier for a parking-lot snooper to find and use as entry points into a network.
In storage: Formerly too small to be dangerous, the capacity of USB thumb drives has exploded to the point that 1GB devices have followed Wi-Fi detectors down through the $60 price floor-and may come in unexpected forms, such as the back end of a ballpoint pen or a foldout element on what looks like a Swiss Army knife (along with other useful geek tools).
The pervasive threat of USB storage devices was dramatically demonstrated when several were seized in a New Mexico drug raid in October. The devices turned out to contain what appeared to be classified files from the Los Alamos National Laboratory, with an apparent connection between the accused drug dealer and a Los Alamos contract employee.
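The processing trend cited above is easy to sanity-check with rough arithmetic. The per-node key-testing rate and node count below are our own assumptions for illustration, not benchmarks:

```python
# Back-of-envelope check on the cost of brute-forcing DES. Deep Crack
# searched the 2**56 keyspace in days in 1998 for $250,000; assume
# (illustratively) a modern node testing a billion keys per second,
# with 100 nodes rented from a public grid.

KEYSPACE = 2 ** 56
KEYS_PER_SECOND = 1_000_000_000   # assumed per-node rate
NODES = 100

seconds = KEYSPACE / (KEYS_PER_SECOND * NODES)
days = seconds / 86_400
print(f"Exhaustive search: about {days:.1f} days; "
      f"expected hit in half that, on average")
```

Under these assumptions the full search takes a little over a week, which is the point of the trend: what once required a purpose-built $250,000 machine is drifting toward rentable commodity capacity.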
A pre-9/11 viewpoint might envision security attacks as expensive and complex, requiring some combination of exotic or conspicuous equipment and unusual expertise. In that environment, detecting and precluding the unusual and unacceptable was a sufficient strategy of vigilance.
Post-9/11 reality is that tools of attack, and the knowledge and skills required to use them, are in many cases common and in other cases trivially easy to obtain-such as when news surfaced in September that a simple Google search was enough to open the master-password back door into a widely installed model of cash machine to make it disgorge $20 bills while counting them as if they were worth only $5 each. There are too many opportunities like this, and too many ways for them to be discovered and shared. For example, exploits aimed at Microsoft's Windows Vista operating system went on sale at $50,000 per revelation toward the end of 2006.
It's therefore necessary to switch the approach to vigilance from denying the forbidden to a far more disciplined model of defining and permitting only what's meant to be allowed.
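That model-deny by default, permit only what is defined-can be sketched in a few lines. The roles and actions below are illustrative, not drawn from any particular product:

```python
# Sketch of the shift from "deny the forbidden" to "permit only
# what is meant to be allowed." Policy entries are illustrative.

class AllowListPolicy:
    def __init__(self, allowed):
        # (role, action) pairs that are explicitly permitted;
        # everything not listed is denied by default.
        self.allowed = set(allowed)

    def permits(self, role, action):
        return (role, action) in self.allowed

policy = AllowListPolicy({
    ("payroll_clerk", "read_payroll"),
    ("payroll_clerk", "update_payroll"),
})
print(policy.permits("payroll_clerk", "read_payroll"))  # True
print(policy.permits("intern", "read_payroll"))         # False: never granted
```

The operational difference from a blacklist is that novel, unanticipated activity fails closed instead of open-exactly the property a climate of vigilance needs.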
Implementing that culture is a process that some technologies can assist. Last year, McAfee acquired Preventsys, adding the latter company's expertise in wireless network analysis and automated audits to its own portfolio of policy-driven tools such as Hercules. Hercules became a McAfee product with McAfee's acquisition of Citadel Security Software.
Proactive design of useful and necessary business processes, identification of the data and the privileges needed to carry them out, and instrumentation of systems to detect any violation of those boundaries are the techniques that will succeed.
Best practices: Vigilance