When it comes to defending technology assets against malicious hackers and other bad guys, I’ve always been a firm believer in understanding, and even using, the tools and tactics of the enemy. In most cases that means a working knowledge of the tools and methods used to scan and compromise networks and systems. But I’ve also favored more proactive means of protection, from tarpits and honeypots to “good” worms that seek out and patch systems with holes that attackers and malicious worms could exploit.

So it should be understandable that I was very interested in a paper presented at the recent USENIX Symposium. The paper, by several researchers at the University of Washington, advocates creating and using friendly botnets to slow down, and even stop, the evil botnets that are used to attack and bring down Web sites and servers.

Put simply, the idea in the paper is to take a swarm of friendly computing systems, essentially a botnet, but something the paper calls a phalanx, and place it in front of Web sites and servers. Interactive communication with a site passes through this cluster of systems, and data is passed to the server only when the server requests it. Now say an evil botnet attacks a site protected by the phalanx. Instead of the full network wave of the evil botnet crushing the server and bringing it down, most of the traffic would be absorbed by the phalanx systems, with only a small amount reaching the main server. Best of all, even under a massive denial-of-service attack, a server protected in this way would stay up and running.

But where will these good botnet systems come from? The bad guys use worms and rootkits to take over zombie systems and make them part of their botnets; will the good guys force systems to be part of their good phalanx botnets?
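To make the pull-based idea concrete, here is a minimal simulation sketch. The class names (`Mailbox`, `ProtectedServer`) and all parameters are my own illustrative inventions, not the paper's implementation; the real Phalanx design also verifies cryptographic authenticators at each node, which is omitted here. The point the sketch shows is just the core inversion: clients (and attackers) can only push traffic into the swarm's buffers, while the server pulls at a rate it chooses, so a flood overflows the edge, not the origin.

```python
import random
from collections import deque

class Mailbox:
    """One node in the defensive swarm: buffers packets for the server.

    Hypothetical simplification -- the paper's mailboxes also check
    per-packet authenticators before queuing anything.
    """
    def __init__(self, capacity=100):
        # A bounded queue: once full, further deposits push out old
        # packets, i.e. excess flood traffic is dropped at the edge.
        self.queue = deque(maxlen=capacity)

    def deposit(self, packet):
        self.queue.append(packet)

    def withdraw(self):
        # Nothing is ever pushed to the server; it must ask.
        return self.queue.popleft() if self.queue else None

class ProtectedServer:
    """The origin server: it only ever *requests* packets from mailboxes."""
    def __init__(self, mailboxes, pull_rate=50):
        self.mailboxes = mailboxes
        self.pull_rate = pull_rate  # packets the server accepts per tick
        self.received = []

    def tick(self):
        for _ in range(self.pull_rate):
            pkt = random.choice(self.mailboxes).withdraw()
            if pkt is not None:
                self.received.append(pkt)

# --- Simulation: a 100,000-packet flood hits the swarm, not the server ---
random.seed(1)
swarm = [Mailbox(capacity=100) for _ in range(20)]
server = ProtectedServer(swarm, pull_rate=50)

for i in range(100_000):                      # attack traffic
    random.choice(swarm).deposit(("attack", i))
random.choice(swarm).deposit(("legit", 0))    # one legitimate packet

server.tick()
# The server's intake is capped at pull_rate no matter the flood size
print(len(server.received))   # prints 50
```

Whatever the flood size, the swarm as a whole holds at most 20 × 100 = 2,000 packets and the server processes only 50 per tick; the other ~98,000 attack packets were dropped by the bounded mailbox queues before the origin ever saw them.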
Well, actually, there are already plenty of computing resources available for use as good phalanx botnets. The giant content delivery networks have vast amounts of computing power at their disposal, and it would be relatively trivial to repurpose some of it as a secure good botnet. There are also plenty of examples of people voluntarily contributing computing resources to distributed causes, with the popular SETI@home probably the best-known example. As the paper points out, it could even be possible to use something like BitTorrent to build good phalanx botnets to stop the evil ones.

There are, of course, many more details to the phalanx approach outlined in the paper, but in the main there is nothing technologically outrageous about it. From a purely technological standpoint, I could see any one of the major CDNs deploying a defensive phalanx botnet by the end of the year. At that point it could become a lot more difficult for the bad guys to use their botnets as weapons of extortion and intimidation. Sure, security constantly changes and the bad guys will eventually come up with new tactics, but this phalanx idea looks to be a very good approach to protecting servers and Web sites. Like the old saying that it takes a thief to catch a thief, it may take a botnet to stop a botnet.