Network IPS Isn't a This-Generation Technology

 
 
By Larry Seltzer  |  Posted 2006-07-05
Opinion: Are you running a network-based IPS? Do you run it in-line? Do you actually have blocking on? I didn't think so.

The network perimeter at the typical enterprise is getting to be a crowded place. You've got firewalls and VPN concentrators at the very outside. You've got intrusion detection and prevention, virus scanners, e-mail security devices, and boxes to do network access control and content filtering. Of course you might have outsourced some services, such as e-mail hygiene. But others really need to be done at your actual network perimeter.

And all these boxes run in-line, creating a tenuous physical architecture, potential points of failure and multiple potential performance bottlenecks. But, as Arlo Guthrie said, that's not what I came here to tell you about.

I came to talk about the toughest job in the whole perimeter: the intrusion prevention system, or IPS. It turns out that the IPS not only has the most difficult job at the network perimeter, but it's also not generally taken seriously.

Click here to read eWEEK Labs' sample RFP for IPS implementation.

The IPS at the network perimeter evolved out of the IDS (intrusion detection system), which scans network traffic looking for signs of attack and reports them. IDSes developed a reputation for bombarding administrators with reports, very few of which really needed to be dealt with.

IPSes, like IDSes, are driven by attack signatures that they scan traffic for. An IPS goes a step further: It not only scans for attacks, but also attempts to block them. There's another type of IPS, the host-based IPS, which runs on a computer and attempts to block attacks aimed at that specific computer.
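To make the distinction concrete, here is a minimal sketch (purely illustrative, not any vendor's implementation) of how a signature-driven device behaves in detection-only mode versus blocking mode. The signature entries, function names and return strings are invented for illustration.

# Toy signature engine: the same matching logic serves as an IDS (alert only)
# or an IPS (drop the traffic), depending on whether blocking is enabled.
from typing import Optional

SIGNATURES = {
    b"/cgi-bin/phf?": "legacy phf CGI probe",
    b"../../../../etc/passwd": "directory traversal attempt",
}

def match_signature(payload: bytes) -> Optional[str]:
    """Return the name of the first matching signature, or None."""
    for pattern, name in SIGNATURES.items():
        if pattern in payload:
            return name
    return None

def handle_packet(payload: bytes, blocking: bool) -> str:
    """blocking=False is IDS behavior (alert only); blocking=True is IPS behavior."""
    hit = match_signature(payload)
    if hit is None:
        return "forward"
    if blocking:
        return f"drop ({hit})"             # IPS: it sits in-line, so it can refuse to forward
    return f"forward + alert ({hit})"      # IDS: detection only; an administrator reads the report

if __name__ == "__main__":
    probe = b"GET /cgi-bin/phf?Qalias=x%0a/bin/cat%20/etc/passwd HTTP/1.0"
    print(handle_packet(probe, blocking=False))  # IDS mode: forward + alert
    print(handle_packet(probe, blocking=True))   # IPS mode: drop

The only real difference is that one decision is advisory and the other is enforced on live traffic; everything that follows about false positives and performance hangs on that difference.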

Unsurprisingly, network IPSes suffer from all the problems of network IDSes: Depending on how you tune them, they are prone to false positives and have the potential to slow all network traffic. Host-based IPSes have a significant advantage: Since they run on the computer they're protecting, they can monitor the state of that system and the context of the attack.

Both network and host-based IPSes can detect specific attacks that have a specific signature. Both can, to some degree, detect some attacks generically, such as stack-based buffer overflows. But the host-based IPS can look at the state of the registers and the process into which that potential buffer overflow is traveling. It has more information from which to make intelligent decisions.
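As a toy illustration of that "generic" detection (a made-up example; the thresholds and the field delimiter are assumptions, not taken from any product), here is roughly all a network device can do with nothing but the bytes on the wire: flag traffic that merely looks like a stack-smashing attempt, such as a long run of x86 NOP bytes or an implausibly long field.

# Generic, context-free overflow heuristic -- the kind of check a network IPS
# can make without knowing which process or buffer the bytes will land in.
NOP = 0x90            # x86 no-op, the classic sled filler
SLED_THRESHOLD = 64   # assumed cutoff for a "suspiciously long" NOP run
FIELD_LIMIT = 512     # assumed limit on a "reasonable" field length

def looks_like_overflow(payload: bytes) -> bool:
    longest_run = run = 0
    for byte in payload:
        run = run + 1 if byte == NOP else 0
        longest_run = max(longest_run, run)
    oversized = any(len(field) > FIELD_LIMIT for field in payload.split(b"&"))
    return longest_run >= SLED_THRESHOLD or oversized

print(looks_like_overflow(b"user=alice&msg=hello"))             # False
print(looks_like_overflow(b"A" * 16 + b"\x90" * 128 + b"\xcc")) # True -- but is it a real attack?

A host-based IPS could go further and check whether the receiving process is even copying those bytes into a vulnerable buffer at that moment; that is exactly the context the network box never sees.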

It's theoretically possible to make a network IPS that tracks context in the same way for each of the systems it's monitoring. Imagine the load on such a system, and imagine having to run it in-line.

This is the trade-off that dooms the IPS: In order to monitor enough context to make an intelligent decision about attacks, an IPS would have to consume so much processing time and so many resources that it would become a cost and performance burden. If you tune it down to the point where it doesn't impose such a burden, it won't have context, and you'll miss attacks, or you'll get false positives, or possibly both.

This is why many administrators turn off blocking, effectively turning their IPS devices into IDS devices. Maybe they have the idea that they'll forensically examine the logs over time and see if they're trustworthy enough to turn on blocking. Do you really want to spend time on this, or would you rather go to the ball game? You'll find me in section 205.

A network-based IPS makes for great theory, but I wouldn't trust one that wasn't severely constrained to well-proven detections. It's just too hard a job for an in-line device.

Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983. Check out eWEEK.com's Security Center for the latest security news, reviews and analysis. And for insights on security coverage around the Web, take a look at eWEEK.com Security Center Editor Larry Seltzer's Weblog.
 
 
 
 
Larry Seltzer has been writing software for, and English about, computers ever since (much to his own amazement) he graduated from the University of Pennsylvania in 1983.

He was one of the authors of NPL and NPL-R, fourth-generation languages for microcomputers by the now-defunct DeskTop Software Corporation. (Larry is sad to find absolutely no hits on any of these products on Google.) His work at Desktop Software included programming the UCSD p-System, a virtual machine-based operating system with portable binaries that pre-dated Java by more than 10 years.

For several years, he wrote corporate software for Mathematica Policy Research (they're still in business!) and Chase Econometrics (not so lucky) before being forcibly thrown into the consulting market. He bummed around the Philadelphia consulting and contract-programming scenes for a year or two before taking a job at NSTL (National Software Testing Labs) developing product tests and managing contract testing for the computer industry, governments and publications.

In 1991 Larry moved to Massachusetts to become Technical Director of PC Week Labs (now eWeek Labs). He moved within Ziff Davis to New York in 1994 to run testing at Windows Sources. In 1995, he became Technical Director for Internet product testing at PC Magazine and stayed there till 1998.

Since then, he has been writing for numerous other publications, including Fortune Small Business, Windows 2000 Magazine (now Windows and .NET Magazine), ZDNet and Sam Whitmore's Media Survey.
 
 
 
 
 
 
 
