How Do We Stop Fast Flux Networks?
Botnets change quickly to avoid being taken down. Larry Seltzer assesses a move to combat them – and says more could be done
Even with all the mistakes users make and all the effort criminals put in, you might wonder how these networks of illicit software stay up. Plenty of people are trying to take them down, and many of them are capable people, often with real authority. The answer is that botnets have defense mechanisms built in, mechanisms that are often analogous to techniques used by legitimate networks.
In the illicit world we call these “fast flux” networks. A number of characteristics define this type of network and explain why it’s so hard to take down (a code sketch of how these signals might be probed follows the list):
- The entry point to the network is a domain. Different users accessing the domain are presented with a wide collection of responding systems, each a different bot in a botnet.
- The systems in the network have multiple IP addresses from multiple ISPs and exist on multiple physical networks, probably all over the world.
- Nodes on the network monitor the uptime of other nodes to determine which have been shut down.
- The DNS entries for the network have very low TTL (“time to live”) values; a low TTL means the entries won’t be cached for long, so the servers are rechecked frequently.
- Extensive use is made of proxy servers. Users rarely, if ever, see the actual host systems; instead they are served by a wide collection of proxies.
- Even the NS (name server) entries in the domain’s registration get fluxed, a variant sometimes called “double flux”.
- The whole network is self-contained: the hosts, the proxies and the DNS servers all run on the botnet.
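To make the first and fourth signals concrete, here is a minimal sketch, using the dnspython library, of how an investigator might probe a suspect domain: resolve it repeatedly and watch for a churning set of A records paired with very low TTLs. The domain name, round count and thresholds below are my own illustrative assumptions, not part of any real takedown toolkit.

```python
import time

import dns.exception
import dns.resolver  # pip install dnspython


def probe_domain(domain, rounds=5, interval=30.0):
    """Resolve a domain repeatedly, collecting every A record and TTL seen.

    Fast flux signals: the set of responding IPs keeps growing across
    rounds, and the TTLs are very low, so victims are steered to fresh
    bots quickly. A real probe would space queries beyond the TTL.
    """
    seen_ips = set()
    ttls = []
    for _ in range(rounds):
        try:
            answer = dns.resolver.resolve(domain, "A")
        except dns.exception.DNSException:
            continue  # lookup failed this round; keep probing
        ttls.append(answer.rrset.ttl)
        seen_ips.update(record.address for record in answer)
        time.sleep(interval)
    return seen_ips, ttls


# Illustrative thresholds only: many distinct IPs plus sub-five-minute
# TTLs is suspicious, though legitimate CDNs can look similar.
ips, ttls = probe_domain("suspect.example")  # hypothetical domain
if len(ips) > 10 and ttls and max(ttls) < 300:
    print(f"Possible fast flux: {len(ips)} IPs, TTLs {sorted(set(ttls))}")
```

The same probe pointed at the domain’s NS records, rather than its A records, would surface the name-server fluxing described in the sixth item.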
The point of all of this is to make the network at once difficult to identify as a whole, and impossible to take down. Well, almost impossible. The one weak spot in a fast flux network is the domain name. Take it down and the network still exists, but all the links pointing to it stop working. New links need to be sent out, and perhaps multiple domains are already pointing to the network, so it’s not completely down. Still, the best way to take down fast flux networks is to improve the speed with which their domains can be taken down.
About a year ago, ICANN’s GNSO Council established a working group to study fast flux hosting, and that group has released its first report on the subject. Like most ICANN reports it’s not fun reading: it spends page after page explaining the blindingly obvious, all in ICANN’s trademark thick bureaucratese, and it indulges a few crackpot opinions. Nevertheless, there is some good stuff in here, and some real progress could come of it, although such changes are likely to take a long time. The working group has some well-known and sincere people on it, including Jose Nazario of Arbor Networks, Steve Crocker and Wendy Seltzer (no relation).
I was, at first, confused by the analogies the report draws between fast flux networks and legitimate networks, but there is something to them, at least in the abstract: both use proxy servers extensively for security and performance, and both answer requests from multiple hosts (in legitimate networks this is called “DNS round robin”, among other names). Even low TTLs, thought by some to be the signature characteristic of fast flux, have legitimate uses; I’ve used them myself while transitioning systems from one network to another, in order to minimise downtime. In fact, a fast flux network has a lot in common with a content distribution network such as Akamai’s.
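That overlap is exactly why TTLs and round-robin responses alone can’t convict a domain. One rough heuristic researchers have used to separate the two cases is to look at where the responding IPs live: fast flux nodes are compromised consumer machines, so their reverse-DNS names often carry ISP access-network markers, while a CDN’s edge nodes usually reverse-resolve into the provider’s own domain. The sketch below, again in dnspython, is my own illustration of that idea; the hint list is purely made up for the example, and a real classifier would lean on ASN data instead.

```python
import dns.exception
import dns.resolver
import dns.reversename  # all from dnspython

# Illustrative markers of consumer access networks, not a vetted list.
CONSUMER_HINTS = ("dsl", "dyn", "dial", "cable", "pool", "dhcp", "ppp")


def looks_residential(ip):
    """Return True if an IP's PTR record suggests a home broadband line,
    the kind of host a botnet runs on, rather than a data-centre node."""
    try:
        query = dns.reversename.from_address(ip)
        ptr = dns.resolver.resolve(query, "PTR")
    except dns.exception.DNSException:
        return False  # no reverse entry; inconclusive
    name = str(ptr[0]).lower()
    return any(hint in name for hint in CONSUMER_HINTS)


# Applied to the IPs gathered by the earlier probe, a high proportion of
# residential-looking hosts tips the scales toward fast flux over a CDN.
```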