Even with all the mistakes that users make and all the effort put in by criminals, you might wonder how these networks of illicit software stay up. Plenty of capable people, often with real authority, are trying to take them down. The answer is that botnets have defense mechanisms built in, mechanisms that are often analogous to techniques used by legitimate networks.
In the illicit world we call these “fast flux” networks. A number of characteristics define this type of network and explain why it’s so hard to take down:
- The entry point to the network is a domain. Different users accessing that domain are presented with a wide collection of responding systems, each a different bot in the botnet.
- The systems in the network have multiple IP addresses from multiple ISPs and exist on multiple physical networks, probably all over the world.
- Nodes on the network monitor the up times of other nodes to determine who has been shut down.
- The DNS entries for the network have very low TTL (“time to live”) values; a low value means that the entries won’t be cached for long and the servers will be rechecked frequently.
- Extensive use is made of proxy servers. Users rarely if ever see actual host systems, but instead are served by a wide collection of proxies.
- The NS (name server) entries in the registration themselves get fluxed.
- The whole network is self-contained; the hosts, the proxies, the DNS servers, all run on the botnet.
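The characteristics above lend themselves to simple heuristics. As a rough sketch (the data structure, field names, and thresholds here are my own hypothetical choices, not anything from the ICANN report), a detector watching repeated DNS lookups of a domain might flag fast flux behavior like this:

```python
from dataclasses import dataclass

@dataclass
class DnsSnapshot:
    """One observed answer set for a domain (hypothetical structure)."""
    a_records: list  # IP addresses returned for the domain
    ttl: int         # TTL of the A records, in seconds
    asns: set        # autonomous systems the IPs map to

def looks_fluxy(snapshots):
    """Crude heuristic: many distinct IPs across repeated lookups,
    spread over several networks, all served with very short TTLs."""
    all_ips, all_asns = set(), set()
    short_ttl = True
    for snap in snapshots:
        all_ips.update(snap.a_records)
        all_asns.update(snap.asns)
        short_ttl = short_ttl and snap.ttl <= 300  # five minutes or less
    # Thresholds are illustrative; real detectors tune these empirically.
    return len(all_ips) >= 10 and len(all_asns) >= 5 and short_ttl
```

A legitimate site resolving to one or two stable addresses with a day-long TTL would score nothing here, while a domain cycling through dozens of consumer-ISP addresses across many networks would trip every condition.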
The point of all of this is to make the network at once difficult to identify as a whole, and impossible to take down. Well, almost impossible. The one weak spot in a fast flux network is the domain name. Take it down and the network still exists, but all the links pointing to it no longer work. New links need to be sent out, and perhaps multiple domains already point to the network, so it’s not completely down. Still, the best way to take down fast flux networks is to improve the speed with which their domains may be taken down.
About a year ago ICANN’s GNSO Council established a working group to study fast flux hosting and that group has released its first report on the subject. Like most ICANN reports it’s not fun reading. It uses page after page to explain the blindingly obvious and thoroughly employs ICANN’s language of thick bureaucratese. The report indulges a few crackpot opinions. Nevertheless, there is some good stuff in here. It’s possible some real progress could come of it, although such changes are likely to take a long time. The working group has some well-known and sincere people on it, including Jose Nazario of Arbor Networks, Steve Crocker and Wendy Seltzer (no relation).
I was, at first, confused by the analogies the report draws between fast flux networks and legitimate networks, but there is something to it in a very abstract way: both use proxy servers extensively for security and performance. Both use multiple response hosts (in legit networks it’s called “DNS round robin,” among other names). Even low TTLs, thought by some to be the signature characteristic of fast flux, have legitimate uses; I’ve used them myself while transitioning systems from one network to another, in order to minimize downtime. In fact, a fast flux network has a lot in common with a content distribution network such as Akamai’s.
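The key difference from the legitimate technique is visible in the answer sets themselves. In DNS round robin, the same small, stable pool of addresses is simply rotated between responses; in fast flux, the pool itself churns. A minimal sketch of the legitimate case (addresses and pool size are made up for illustration):

```python
from itertools import cycle

# Hypothetical record set for a legitimate round-robin DNS service:
# a small, fixed pool of server addresses.
POOL = ["198.51.100.10", "198.51.100.11", "198.51.100.12"]

def round_robin_answers(n_queries, pool=POOL):
    """Yield the answer order a round-robin name server might return:
    the ordering rotates each query, but the membership never changes."""
    rotation = cycle(range(len(pool)))
    for _ in range(n_queries):
        start = next(rotation)
        yield pool[start:] + pool[:start]
```

Every response contains the same addresses in a different order, which spreads load; a fast flux domain, by contrast, would return a largely different set of addresses on each lookup.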
But of course, the similarities are only interesting, they aren’t exculpatory. Akamai pays a lot of money to build and maintain its network and protect it from the likes of fast flux networkers. Fast flux networks are built surreptitiously on the computers of unwilling users who aren’t compensated for having their computers turned into pawns in a criminal enterprise. I wish the report didn’t spend so much time obsessing over these academic similarities. When you find one of these networks it’s not hard to see what’s going on, especially since all the servers on it are running on consumer ISP clients.
I’m also confused and offended by the concerns of what appear to be a minority in the working group that fast flux networks could be used by political dissidents to hide their free speech activities. I’m all for facilitating free speech all over the world, but there’s no need to steal the use of others’ computers to do so.
Prolonging the Attack
At least the report explicitly recognized the heart of the purpose of fast flux for illicit purposes: It prolongs the life of an attack. The report cites a paper by Tyler Moore and Richard Clayton of Cambridge as measuring that fast flux attacks last at least twice as long as non-flux attacks.
ICANN’s work in this is hardly the first attempt to study fast flux networking or how to stop it. The ubiquitous Gadi Evron started a conversation on the subject three years ago (work that was not credited in the ICANN report; for shame, for shame…). I was in on the discussions then, and it was clear that the main obstacle to taking down such networks was lazy and/or complicit domain name registrars, although many registrars were and still are responsive to responsible reports of abuse from responsible agencies. Organizations Evron was involved with had success in taking down some networks, not so much others. The ICANN report states that “[N]o registrar has been prosecuted for facilitating criminal activities related to fast flux domains, but there have been reports linking one ICANN-accredited registrar to a large number of fraudulent domains including fast flux domains.” I’m not at all surprised.
My own guess is that the best way to do this is at the domain level, and therefore faster response is required at the registrar level. ICANN has deaccredited a registrar or two recently for gross abuse, but in the main they have been indulgent of registrars and only reacted after problems have festered for years. As one observer noted, generously I think, in the public comments to the ICANN report:
“The report may say that registrars and resellers only ‘have the appearance of facilitation of fast flux domain attacks’, but the fact is that they have created an environment that invites abuse. They too often simply do not maintain staff and policies adequate to prevent even the most blatant abuses from taking place.”
Personally, I think it’s worse than this. I know from personal experience that some registrars ignore clear evidence of abuse unless they’re forced to react.
Absent any crackdown on registrars, it’s worth noting that the function of quick take-downs could be performed effectively at the registry level. I’ve always liked this approach because it’s so efficient, but there doesn’t seem to be a lot of stomach for it. Ideally you’d only want to have a registry take down a domain when the registrar, the company with whom the registrant has a relationship, is unresponsive. If a registrar is that unresponsive to a clear policy process (none of which exists yet, of course) then things are bad and it deserves serious scrutiny.
I asked Gadi Evron about all this again and he reminded me that there are responsible registrars and registries out there: “I am pleased with ICANN’s continuing work on this subject, which I’ve had the pleasure to help initiate with Steve Crocker a couple of years ago. While their progress is essential, the part of the [registrar] industry which sees the need has not been waiting for consensus, and takes care of these issues under their own authority.” Unfortunately, one bad, unresponsive registrar can do a lot of damage.
The working group does list “accelerated domain suspension processing in collaboration with certified investigators/responders” as one of the possible ways to work on the problem. Given how conservative ICANN is often inclined to be, this is the best we could hope for. And if there are teeth in the policy to enforce these rules, it could make a practical difference. This is what we were talking about three years ago with Gadi Evron’s group. But this approach was not the conclusion of the group; we’re still too early in the ICANN process to go that far. It’s just one of the proposed reactions. The “Interim Conclusions” of the report are, unsurprisingly, that more study is needed. That’s something anyone can say if they don’t think that hardened networks of malicious systems are an urgent problem.
Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983.
For insights on security coverage around the Web, take a look at eWEEK.com Security Center Editor Larry Seltzer’s blog Cheap Hack.