The weaknesses of conventional anti-virus are well-known: it's mostly a reactive approach, looking for problems after they've already been identified. Threats that haven't yet been identified, known as "zero-day attacks," either have to be caught through more generic threat detection techniques or slip through undetected.
The generic detections, also known as heuristics, are prone to false positives. Kaspersky Anti-Virus, for example, frequently identifies legitimate e-mails to me from Bank of America as Trojan-Spy.HTML.Fraud.gen. I've seen false positives on real executable programs too, although it's pretty rare from good AV.
Respected kernel researcher Joanna Rutkowska recently blogged on the subject, saying that the signature/heuristics model was a strategic mistake.
“This is an example of how the security industry took a wrong path, the path that never could lead to an effective and elegant solution,” she wrote.
But every now and then I get a pitch from a vendor or a note from a reader proposing a whitelist approach. SecureWave's "Positive Model" approach is a good example, as is Bit9 Parity. In both cases the idea is to specify which programs can run on the system and disallow anything else.
This sure is a tempting approach, and at least some form of it is surely a good idea on any managed network. Why should IT in a business allow anything other than approved programs to run? But the idea that this will prevent malware from running in all contexts is wishful thinking, and I think it's impractical to implement such systems in homes and very small businesses, where there is no experienced administrator with authority over system policies.
A related technology that does some good in this regard, but falls short of perfection, is the digital signature. Microsoft recently took a lot of guff for blocking a device driver that let other drivers elude the requirement on 64-bit Windows Vista that all drivers be digitally signed with a signature issued by a trusted certificate authority.
Microsoft wasn't the first to require digital signatures, although it often seems that way from the claims of those with a "blame Microsoft first" attitude. Java applets, for example, need to be signed in order to perform operations outside of the sandbox, such as interacting with the file system. For a good example of this behavior, try the Secunia Software Inspector, an applet that traverses your file system reporting old and vulnerable applications.
Is there a solution?
Such signatures aren't really a whitelist, but they are meant to enforce accountability, which is related. For instance, one could set a rule whitelisting certain vendors and thereby allow any code signed with their keys. And as Rutkowska says, one class of largely obsolete malware, the file infector, is defeated by a well-implemented system of code signatures. A whitelist system could also be implemented by having the administrator use a company key to sign only approved programs. I'm sure this is basically how some of the commercial approaches work.
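As a minimal sketch of that signer-based whitelist idea (the key fingerprints are hypothetical, and a real system would cryptographically verify an Authenticode or X.509 signature rather than just compare fingerprints), the policy decision boils down to something like this:

```python
from typing import Optional

# Hypothetical fingerprints of signing keys the administrator trusts:
# the company's own key plus an approved vendor's key.
TRUSTED_SIGNERS = {
    "a1b2c3d4",  # company signing key (illustrative value)
    "55ff0012",  # approved vendor key (illustrative value)
}

def may_run(program_name: str, signer_fingerprint: Optional[str]) -> bool:
    """Allow a program only if it is signed by a whitelisted key.

    Toy model: real implementations verify the signature itself and the
    certificate chain; here we only model the allow/deny policy.
    """
    if signer_fingerprint is None:
        return False  # unsigned code is disallowed outright
    return signer_fingerprint in TRUSTED_SIGNERS

print(may_run("payroll.exe", "a1b2c3d4"))  # True: signed with the company key
print(may_run("game.exe", "deadbeef"))     # False: unknown signer
print(may_run("tool.exe", None))           # False: unsigned
```

Anything not signed by an approved key simply doesn't run, which is the whole point of the positive model.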
There's so much software out there; how can anyone know what's trustworthy? We currently employ the AV companies to make these decisions for us with their reactive approach. But how about taking a page from the world of e-mail protection (admittedly, not the most successful bunch of technologists, but stick with me for a moment) and implementing a reputation system?
Here's how it could work: All code has to be signed, or at least it needs to be in order to be trusted. Third-party reputation systems keep databases of companies and their code-signing public keys. They double-check the checks supposedly performed by certificate authorities and take reports of abuse, feeding them back into the reputation report. When a program is installed, its public key is checked for reputation. If the signer of a new program has no reputation, or the program is not signed, it deserves a high level of suspicion; perhaps this is when you turn on the heuristic scanner with the paranoia level set to "Maximum."
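The install-time decision described above could be sketched like this (the reputation database, its key IDs, and the 0.5 threshold are all assumptions for illustration):

```python
from enum import Enum
from typing import Optional

class ScanLevel(Enum):
    NORMAL = "normal"
    MAXIMUM = "maximum"  # heuristic scanner with paranoia set to "Maximum"

# Hypothetical reputation database: signer key ID -> score in [0.0, 1.0],
# maintained by a third-party reputation service.
REPUTATION_DB = {
    "a1b2c3d4": 0.95,  # well-known vendor with a long clean history
    "77aa88bb": 0.20,  # signer with abuse reports fed back into its score
}

def scan_level_for_install(signer_key: Optional[str]) -> ScanLevel:
    """Decide how suspicious to be of a new install."""
    if signer_key is None:
        return ScanLevel.MAXIMUM  # unsigned: highly suspicious
    score = REPUTATION_DB.get(signer_key)
    if score is None or score < 0.5:
        return ScanLevel.MAXIMUM  # no reputation, or a bad one
    return ScanLevel.NORMAL       # reputable signer
```

A reputable signer gets the normal treatment; an unsigned program or an unknown or poorly rated signer triggers the maximum-paranoia scan.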
Periodically, the system could also check for changes in the reputations of signers of installed software and report these to the user or administrator. This is the kind of system that existing AV vendors could be in a position to implement. The real problem is the infrequent use of digital signatures in the programming community.
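The periodic re-check could amount to diffing reputation snapshots against the installed inventory, along these lines (all names and scores are illustrative):

```python
def reputation_changes(installed: dict, old: dict, new: dict) -> list:
    """Report installed programs whose signer's reputation has changed.

    installed maps program name -> signer key ID; old and new are
    reputation snapshots mapping key ID -> score. Toy model only.
    """
    reports = []
    for program, key in installed.items():
        before, after = old.get(key), new.get(key)
        if before != after:
            reports.append(
                f"{program}: signer {key} reputation {before} -> {after}"
            )
    return reports

installed = {"tool.exe": "77aa88bb", "payroll.exe": "a1b2c3d4"}
old = {"77aa88bb": 0.8, "a1b2c3d4": 0.95}
new = {"77aa88bb": 0.2, "a1b2c3d4": 0.95}
print(reputation_changes(installed, old, new))
# ['tool.exe: signer 77aa88bb reputation 0.8 -> 0.2']
```

Only the program whose signer's score dropped gets reported to the user or administrator; unchanged signers stay quiet.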
Whitelists and signatures can't stop a buffer overflow in an approved program from executing malware passed to it; that system is just as compromised. And while it would be trickier for that attack to persist on the system, it's hardly impossible. So I'm skeptical of the broad brush Rutkowska uses to paint signature-based AV as a historic mistake. It was expedient at a time when elegant solutions were unavailable. In fact, they still are.
Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983.