Even with all the mistakes that users make and all the effort put in by criminals, you might wonder how these networks of illicit software stay up. Plenty of people are trying to take them down, many of them capable and some with real authority. The answer is that botnets have defense mechanisms built in, mechanisms that are often analogous to techniques used by legitimate networks.
In the illicit world we call these "fast flux" networks. A number of characteristics define this type of network and explain why it's so hard to take down:
- The entry point to the network is a domain. When accessing the domain, different users are presented with a wide collection of responding systems, each a different bot in the botnet.
- The systems in the network have multiple IP addresses from multiple ISPs and exist on multiple physical networks, probably all over the world.
- Nodes on the network monitor the up times of other nodes to determine who has been shut down.
- The DNS entries for the network have very low TTLs (the "time to live" value; a low value means the entries won't be cached for long and the servers will be rechecked frequently).
- Extensive use is made of proxy servers. Users rarely if ever see actual host systems, but instead are served by a wide collection of proxies.
- The NS (name server) entries in the domain's registration are themselves fluxed.
- The whole network is self-contained; the hosts, the proxies, the DNS servers, all run on the botnet.
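The traits above can be sketched as a toy heuristic. This is a hypothetical scoring function, not anyone's actual detection tool; the `observations` format (repeated lookups of one domain, each a timestamp, TTL, and set of answering IPs) and the thresholds are assumptions for illustration. Real systems weigh many more signals, such as whether the IPs sit in consumer ISP address space.

```python
# Hypothetical sketch: score repeated DNS observations of one domain
# for fast-flux traits. Each observation is assumed to be a tuple of
# (timestamp, ttl_seconds, list_of_answering_ips), gathered elsewhere.

def fast_flux_score(observations):
    """Return a rough 0-3 score: one point per fast-flux trait seen."""
    score = 0

    # Collect every IP that ever answered for the name.
    all_ips = set()
    for _ts, _ttl, ips in observations:
        all_ips |= set(ips)

    # Trait 1: very low TTLs, so resolvers recheck constantly.
    if any(ttl <= 300 for _ts, ttl, _ips in observations):
        score += 1

    # Trait 2: many distinct IPs answering for a single name
    # (multiple hosts on multiple networks).
    if len(all_ips) >= 10:
        score += 1

    # Trait 3: the answer set changes between successive lookups
    # (the "flux" itself).
    ip_sets = [frozenset(ips) for _ts, _ttl, ips in observations]
    if len(set(ip_sets)) > 1:
        score += 1

    return score
```

A stable site served from one address with a day-long TTL would score 0 here; a domain cycling through a dozen addresses with 60-second TTLs would score 3.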
About a year ago, ICANN's GNSO Council established a working group to study fast flux hosting, and that group has released its first report on the subject. Like most ICANN reports it's not fun reading. It uses page after page to explain the blindingly obvious and thoroughly employs ICANN's language of thick bureaucratese. The report indulges a few crackpot opinions. Nevertheless, there is some good stuff in it. It's possible some real progress could come of it, although such changes are likely to take a long time. The working group has some well-known and sincere people on it, including Jose Nazario of Arbor Networks, Steve Crocker and Wendy Seltzer (no relation).
I was, at first, confused by the analogies the report draws between fast flux networks and legitimate networks, but there is something to it in a very abstract way: both use proxy servers extensively for security and performance. Both use multiple response hosts (in legit networks it's called "DNS round robin" and other names). Even low TTLs, thought by some the signature characteristic of Fast Flux, have some legitimate use; I've used them myself while transitioning systems from one network to another, in order to minimize downtime. In fact, a fast flux network has a lot in common with a content distribution network such as Akamai's.
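To make the legitimate side of the comparison concrete, here is an illustrative BIND-style zone fragment (example.com and the addresses are placeholders) showing ordinary DNS round robin combined with a deliberately low TTL, the kind of thing an administrator might use while migrating hosts between networks:

```text
; Illustrative only: round-robin A records with a low (60-second) TTL.
; Resolvers rotate among the addresses and recheck frequently,
; which looks superficially like flux but serves a legitimate purpose.
www.example.com.  60  IN  A  192.0.2.10
www.example.com.  60  IN  A  192.0.2.11
www.example.com.  60  IN  A  192.0.2.12
```

The mechanism is the same one a fast flux network abuses; what differs is who owns the machines behind those addresses.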
But of course, the similarities are only interesting, they aren't exculpatory. Akamai pays a lot of money to build and maintain its network and protect it from the likes of fast flux networkers. Fast flux networks are built surreptitiously on the computers of unwilling users, who aren't compensated for having their computers turned into pawns in a criminal enterprise. I wish the report didn't spend so much time obsessing over these academic similarities. When you find one of these networks it's not hard to see what's going on, especially since all the servers on it are running on consumer ISP clients.
I'm also confused, and offended, by the concerns of what appears to be a minority in the working group that fast flux networks could be used by political dissidents to hide their free speech activities. I'm all for facilitating free speech all over the world, but there's no need to steal the use of others' computers to do so.