The Anti-Malware Industry Tries to Save Itself

 
 
By Larry Seltzer  |  Posted 2008-02-11
Testing anti-malware products is hard enough with the old model of detection. We need new tests for the new models that we all know must come.

The declining practicality of the anti-malware business is an old story; one day it's got to collapse under the weight of its technical model. Can it be saved in time?

During the week of Feb. 4, several vendors and testing organizations announced the formation of the AMTSO (Anti-Malware Testing Standards Organization), which is dedicated to developing common standards for testing such products.

It's true that testing standards are very important and a much more difficult problem than people outside the business would think. I've been involved in software testing for many years, and I've done some anti-malware testing. As with most testing, there are a number of variables you can adjust.

In a paper presented (with Maik Morgenstern) last year at the AVAR Conference in Seoul, Andreas Marx of AV-Test discussed some of these testing variables for a proposal to test "dynamic detection." Consider some of these:

  • Should hardware be new, old or "typical"?
  • What operating system and version should be tested? Always the latest service pack? Always the latest patches?
  • Should products always be tested with default settings? Tuning up to "aggressive" would also be interesting.
  • How much should virtual machines be used?
  • Is removal important, or just detection?
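To see how quickly these choices multiply, here is a toy sketch of the resulting test matrix. The dimension names and values are my own illustrations drawn from the questions above, not from any actual AMTSO proposal:

```python
from itertools import product

# Hypothetical test-matrix dimensions; names and values are illustrative only.
dimensions = {
    "hardware": ["new", "old", "typical"],
    "os_patch_level": ["latest_service_pack", "latest_patches", "unpatched"],
    "settings": ["default", "aggressive"],
    "environment": ["physical", "virtual_machine"],
    "measure": ["detection_only", "detection_and_removal"],
}

# Cartesian product of all dimension values = distinct test configurations.
configurations = list(product(*dimensions.values()))
print(len(configurations))  # 3 * 3 * 2 * 2 * 2 = 72 setups
```

Even this small, made-up matrix yields 72 configurations per product per sample set, which is why testers end up picking a handful of defensible combinations rather than running them all.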

Many of these variables affect the repeatability of the testing, a factor usually considered non-negotiable. But as a practical matter, both platforms and malware are changing so rapidly that a month from now you'll be busy testing new malware on new OS versions and won't have time to retest on old software.

I could make a good argument (one I actually believe in) on either side of any of these questions, and there are plenty more where those came from. Try this on for size: How about judgment standards? Should the testing just put out numbers, or should it state, as some AV testing organizations have done historically, that some results are satisfactory and others aren't? Is 90 percent detection acceptable? What about 99 percent? Or 99.99 percent?

With respect to dynamic detection, these really are fair questions. Even a very high detection rate might be unacceptable, depending on your point of view. As Marx points out, the number of unique malware samples AV-Test received went from about 330,000 in 2005 to 872,000 in 2006 to almost 5.5 million in 2007. Against that 2007 figure, 99 percent detection leaves 54,900 samples undetected, and even 99.99 percent still misses about 550.
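As a quick sanity check on that arithmetic (taking "almost 5.5 million" to be 5.49 million, the figure consistent with AV-Test's numbers):

```python
# Undetected samples at various detection rates against the 2007 sample count.
samples_2007 = 5_490_000  # "almost 5.5 million" unique samples

for rate in (0.90, 0.99, 0.9999):
    undetected = samples_2007 * (1 - rate)
    print(f"{rate:.2%} detection -> {undetected:,.0f} samples missed")
# 90.00% -> 549,000; 99.00% -> 54,900; 99.99% -> 549
```

The takeaway is that at these sample volumes, the shortfall from even a near-perfect rate is still measured in hundreds or thousands of live threats.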

Testing static malware has been hard enough, but it's mostly an automation exercise, one at which Marx is a world-class master. A recent test compared 24 products against more than a million malware samples. And that was just scanning efficacy; there are many other things you'd want to measure, such as scanning speed and (a related factor) the background load a product places on the system.

As the number of samples continues to escalate, the current approach of developing signatures and releasing them rapidly is bound to fail. The only way forward, as Marx discusses in his paper, is to detect malware more generically, by looking for malicious behavior. It's hardly a new idea; it's just never been done well enough that you could rely on it in the absence of signatures. And it won't be good enough until it is that good.
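To make the idea concrete, behavior-based detection generally means scoring what a program does rather than matching what it is. The sketch below is a deliberately toy version of that approach; the event names, weights, and threshold are all invented for illustration and bear no relation to any real vendor's engine:

```python
# Toy behavior-based scorer: illustrative only, not any product's algorithm.
# Event names and weights are invented for this sketch.
SUSPICIOUS_WEIGHTS = {
    "writes_autorun_key": 3,      # persistence via a registry run key
    "injects_into_process": 4,    # code injection into another process
    "disables_security_tool": 5,  # tampering with AV or firewall
    "reads_user_documents": 1,    # could easily be legitimate
}
THRESHOLD = 6  # flag when cumulative suspicion crosses this line

def classify(observed_events):
    """Return (verdict, score) for a list of observed behaviors."""
    score = sum(SUSPICIOUS_WEIGHTS.get(event, 0) for event in observed_events)
    verdict = "malicious" if score >= THRESHOLD else "benign"
    return verdict, score

print(classify(["reads_user_documents"]))                            # ('benign', 1)
print(classify(["injects_into_process", "disables_security_tool"]))  # ('malicious', 9)
```

The hard part, and the reason such systems haven't displaced signatures, is choosing weights and thresholds that catch novel malware without flooding users with false positives, which is exactly what good dynamic-detection tests would have to measure.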

How will we know when it's that good? We'll need good tests, so the AMTSO is a necessary step in the evolution of the industry. The harder steps come next.

Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983.

For insights on security coverage around the Web, take a look at eWEEK.com Security Center Editor Larry Seltzer's blog, Cheap Hack.

 
 
 
 
 
 
 
 
 
 
 
