: Detection"> Security Step 3: Detection By Cameron Sturdevant The recent revelation that a database at the University of California, Los Angeles, had been hacked was bad enough. But the fact that 800,000-plus identities were made vulnerable over a period of 13 months shows that all the detection advances in the world wont work if they arent implemented.
Technicians at UCLA noticed the unusual traffic patterns that revealed the breach-which exposed the names, addresses, Social Security numbers and birth dates of UCLA students and staff-on Nov. 21, more than a year after the hack had occurred. What's maddening-and mystifying-is that a whole class of network anomaly detection and data leak prevention tools, not to mention a new generation of network and host intrusion detection and prevention tools, was available during the time the UCLA data was being mined for who-knows-what purpose.
In the Detection portion of our 2001 "Five steps to enterprise security" series, we said that detecting network attacks was as much an art as a science. This is still true, but the science has greatly improved, driven in part by regulations that have forced some organizations to pay attention to how personal data is stored, used and transmitted. However, spectacular data breaches, such as the one at UCLA, still leave the impression that personal data isn't valuable enough to warrant more than casual oversight at some organizations. And while regulation might seem a poor way to protect private information, self-imposed rules appear to have an even more dismal track record.
This ugly truth was revealed most recently by the Department of Veterans Affairs. In May 2006, the VA enabled the monumental theft of 26 million personal records from a laptop. According to assurances from the FBI, the theft of the laptop from a private home was aimed at the laptop and not the information it contained. Nonetheless, detection tools available at the time of the loss could have alerted managers to the concentration of this valuable data on a mobile device. In fact, systems available at the time could have prevented the unauthorized movement of this data onto the laptop.

In some cases, technology can't stop the malicious use of personal data. This was exemplified by the loss of 145,000 personal records that ChoicePoint unwittingly sold to impostors in 2005. In that case, weak screening methods allowed the impostors to pose as ChoicePoint customers to get their hands on the goods. Even so, leak control software likely would have added a layer of authentication checking to the business process.

Since our 2001 report, a whole class of data leak prevention tools-which detect out-of-policy data use to flag or prevent improper data movement-has emerged. Tools from Vontu, GTB Technologies, Verdasys, Reconnex, Tablus and Vericept-to name just a few vendors-all work on the basic principle of detecting and blocking unpermitted data use, even when the user is an employee. Most of these vendors' products are built for regulated organizations, usually in the financial services or health care industries. UCLA, which obviously stores a huge amount of sensitive and personal data, likely wasn't using a leak prevention tool, to the detriment of its students, faculty and staff. That's too bad.

But while data losses such as those reported at UCLA, the VA and ChoicePoint make headlines, it's safe to say detection systems have prevented far greater losses. And this points to a huge challenge for IT managers responsible for security: how to accurately state what malicious activity has been prevented through prudent action. This is where the art-or perhaps politics-of detection comes into play. To demonstrate that detection works, reports must be created and used in such a way that non-IT managers can understand them as well. In some cases, trial use of programs that control user access-such as single sign-on tools-can graphically show failed attempts to access data. Reports generated by trial implementations of leak prevention tools are even better at showing how nefarious activity was averted.

On this front, our recommendation has changed from the one we made in 2001, when we focused on vulnerability assessment and penetration testing. Based on our experience since then, we now recommend that IT managers put authorized use at center stage of the security architecture and use risk assessment to determine how to protect valuable data and systems. IT managers must know what constitutes authorized and acceptable use of data and systems and detect all activity that falls outside those bounds. In fact, nearly all of the security products eWEEK Labs tests-especially those that make the breathless claim of being able to stop zero-day attacks with no prior knowledge of the exploited vulnerability-operate by setting a baseline of known good behavior.
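To make that baselining principle concrete, here is a minimal sketch, in Python, of per-host traffic baselining. The record format, host names and threshold are hypothetical, and commercial anomaly detection products use far richer statistical and behavioral models; the point is only to show what detecting activity outside a known-good norm looks like.

```python
# A minimal sketch of the "baseline of known good behavior" idea described
# above. The record format, host names and threshold are hypothetical;
# commercial anomaly detection products use far richer models.
from statistics import mean, stdev

def build_baseline(history):
    """history maps host -> list of outbound bytes-per-hour samples gathered
    during a known-good training window. Returns host -> (mean, stdev)."""
    baseline = {}
    for host, samples in history.items():
        if len(samples) >= 2:
            baseline[host] = (mean(samples), stdev(samples))
    return baseline

def flag_anomalies(baseline, current, k=4.0):
    """Flag hosts whose current outbound volume sits more than k standard
    deviations above their own historical norm, or that have no norm at all."""
    alerts = []
    for host, observed in current.items():
        if host not in baseline:
            alerts.append((host, observed, "no baseline for this host"))
        else:
            mu, sigma = baseline[host]
            if observed > mu + k * max(sigma, 1.0):
                alerts.append((host, observed, f"{observed:,} bytes vs. norm of {mu:,.0f}"))
    return alerts

# A database server that normally sends a few megabytes per hour suddenly
# pushes out hundreds of megabytes -- the kind of pattern that eventually
# tipped off the UCLA technicians.
history = {"db-server": [2_000_000, 2_400_000, 1_900_000, 2_100_000]}
current = {"db-server": 350_000_000, "unknown-host": 5_000_000}
for host, observed, reason in flag_anomalies(build_baseline(history), current):
    print(f"ALERT {host}: {reason}")
```

Even a toy model like this makes the central dependency plain: detection is only as good as the picture of legitimate activity behind it, and that picture has to keep pace with the business.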
However, detecting out-of-bounds data and system use isn't enough to keep security systems in the good graces of upper management. Inflexible systems that can't accommodate, say, traffic spikes at the end of the month or the easy addition of new application traffic are barriers to productivity. To this end, we see products that allow IT technicians associated with business units to create policies as an essential part of implementing detection tools. Our tests of leak detection tools, in particular, have focused on the ability to let authorized users make changes to monitoring functions. IT managers evaluating these types of tools should put this functionality atop their list of essential features.

Indeed, when it comes to detection, there is no substitute for an intimate knowledge of what traffic should be traveling the organization's network and what data and transactions are needed to carry on business. Although vendors of detection tools often emphasize the simplicity of installing and integrating their products into the network, the simple fact is that unless a human being evaluates the traffic and usage patterns revealed by these tools, malicious activity can go undetected. Finally, change management procedures can go a long way toward reducing the false-positive readings often associated with detection tools.

Best practices: Detection
  • Guard data: Treat the personal data the organization collects as an asset worth protecting, even at great cost.
  • Limit information: Work with line-of-business managers to minimize the amount of regulated data captured and stored about any person associated with the organization.
  • Create usage policies: Insist on clearly worded, thoroughly documented usage policies for personal data and intellectual property; business policies are the cornerstone of any effective security and detection implementation.
  • Document data: Know, in great detail, the applications, traffic and patterns of use of all data on the network.
  • Manage change: Work with line-of-business managers to understand any changes in normal use that could be wrongly detected as malicious activity; build change management procedures that enable applications and devices to be added and modified without needlessly setting off detection alerts (a sketch of this idea follows this list).
  • Assess risk: Assess business risk to each component of the IT infrastructure so that detection and security spending is focused on the most important assets.
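
To illustrate the change management point above, here is a minimal sketch of suppressing detection alerts that coincide with an approved change window for the affected system. The ticket format, host names and matching rule are hypothetical; in practice, the lookup would be wired to the organization's own change management or ticketing system.

```python
# A minimal sketch of the change management idea above: detection alerts that
# fall inside an approved change window for the affected system are annotated
# for review rather than escalated. The ticket format, host names and matching
# rule are hypothetical; a real deployment would consult the organization's
# own change management or ticketing system.
from datetime import datetime

# (host, window start, window end, change ticket)
approved_changes = [
    ("app-server-7", datetime(2007, 1, 31, 22, 0), datetime(2007, 2, 1, 2, 0), "CHG-1042"),
]

def triage(alert_host, alert_time):
    """Decide whether a detection alert should be escalated or suppressed."""
    for host, start, end, ticket in approved_changes:
        if host == alert_host and start <= alert_time <= end:
            return f"suppress: covered by approved change {ticket}"
    return "escalate: no matching change record"

print(triage("app-server-7", datetime(2007, 1, 31, 23, 30)))  # falls inside the window
print(triage("app-server-7", datetime(2007, 2, 2, 9, 0)))     # escalated for review
```

The code matters less than the discipline behind it: when planned changes are recorded somewhere the detection system can consult, legitimate activity stops masquerading as an incident.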



     
     
     
     
     
     
     
























     
     
     
     
     
     
     
     
     
     
     