Liveblogging Hal Roberts, Ethan Zuckerman and Jillian York’s presentation on Distributed Denial of Service Attacks Against Independent Media and Human Rights Sites at the Berkman Center. Please excuse misrepresentation, misinterpretation, typos and general stupidity.
Hal begins by outlining the history of denial of service attacks, which “have been around as long as the Internet.” The rise of botnets allowed for distributed denial of service (DDoS) attacks, in which attack traffic comes from many machines at once. Early botnets were controlled over IRC; these days, many are operated through Twitter accounts.
Ethan points out that we’re seeing a rise in botnets being used to attack each other. One of the largest Internet outages of all time — 9 hours long, in China — was caused by a botnet-fueled “turf war” between two online gaming providers.
(Interesting factoid: early DDoS defense systems grew out of the needs of online gambling sites that were being attacked; such sites operate in a legal gray area and may not want to ask the authorities for help defending against attacks.)
Arbor’s ATLAS, which tracks DDoS attacks worldwide, estimates that 500-1500 attacks happen per day. Hal & Ethan believe that ATLAS “only sees the big ones,” meaning the 500-1500 number is a gross underestimate.
DDoS attacks comprise a wide variety of approaches: slowloris attacks tie up a server by opening many connections and sending requests so slowly that each connection stays occupied, while random incessant searches force a server to repeatedly execute expensive database calls, using up all available resources. These two examples are application attacks that essentially “crash the box” (take down a single server). Network attacks that involve volunteers, bots, and/or amplifiers work by “clogging the pipe,” or saturating the flow of traffic, for example by requesting huge amounts of data that flood a server’s connection.
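To make the slowloris mechanism concrete: the attacker never finishes sending its request headers, so every connection it opens stays occupied, and the classic mitigation is an absolute deadline for completing them. Below is a minimal single-threaded sketch of that defense (the port and timeout are illustrative assumptions, not values from the talk; a real server would of course handle connections concurrently).

```python
import socket
import time

HEADER_DEADLINE = 10  # seconds a client gets to finish its headers (assumed value)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8080))
server.listen(128)

while True:
    conn, _addr = server.accept()
    deadline = time.monotonic() + HEADER_DEADLINE
    try:
        data = b""
        # Read until the blank line that ends the HTTP headers, but never
        # wait past the absolute deadline: a slowloris client trickles
        # bytes to keep this loop (and the connection) alive indefinitely.
        while b"\r\n\r\n" not in data:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                raise socket.timeout("headers took too long")
            conn.settimeout(remaining)
            chunk = conn.recv(4096)
            if not chunk:  # client hung up before finishing its headers
                raise ConnectionError("client closed early")
            data += chunk
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nok\n")
    except (socket.timeout, ConnectionError):
        pass  # slow or broken sender: drop the connection
    finally:
        conn.close()
```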
People who face DDoS attacks have several options. One is to obtain a better machine with a higher capacity to handle requests. Another is to rent servers online in order to add resources only when they’re needed. Packet filtering can block malicious traffic (assuming it can be identified); scrubbing involves having a data center filter packets for you. Source mitigation and dynamic rerouting are used when the network itself is flooded and packet filtering and scrubbing become impractical. Both tactics involve preventing the flood of traffic from arriving at all, whether by stopping it at its source or by sending it somewhere else.
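To make the packet-filtering option concrete: at the application layer, “identifying malicious traffic” often reduces to counting requests per source. Here is a minimal sketch of a sliding-window, per-IP rate limiter in front of a toy HTTP server; the window and threshold are invented for illustration, not anything from the study.

```python
import time
from collections import defaultdict
from http.server import BaseHTTPRequestHandler, HTTPServer

WINDOW_SECONDS = 10   # how far back we count requests (assumed value)
MAX_REQUESTS = 20     # per-IP limit within the window (assumed value)

hits = defaultdict(list)  # client IP -> timestamps of recent requests

class FilteringHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]
        now = time.time()
        # Drop timestamps that have aged out of the window, then record this hit.
        hits[ip] = [t for t in hits[ip] if now - t < WINDOW_SECONDS]
        hits[ip].append(now)
        if len(hits[ip]) > MAX_REQUESTS:
            # Treated as malicious: refuse service cheaply instead of doing work.
            self.send_error(429, "Too Many Requests")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), FilteringHandler).serve_forever()
```

Note that this only helps against application attacks that “crash the box”; once the pipe itself is saturated, packets never reach this code at all, which is why flood-level defenses like source mitigation and dynamic rerouting have to happen upstream, with the ISP’s cooperation.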
All of these tactics are problematic in some way: they’re expensive (scrubbing can cost $40,000-50,000 per month), they require considerable advance planning or high-level connections, or they’re tricky to execute (the “dark arts” of DDoS defense).
“All of this is background,” Hal says. Their specific research question involves independent media and human rights sites — what kinds of DDoS attacks are used against them, and how often? How can they defend themselves?
Hal describes a “paradox” of DDoS attacks: overall, the defenses are working pretty well. Huge sites — Google, the New York Times, Facebook — are attacked often, but they manage to stay online. This is because these sites are located close to the core of the network, where around 75% of ISPs are able to respond to DDoS attacks in less than an hour, making DDoS attacks a “manageable problem.” The sites at the edge of the network are much more vulnerable, and they’re also much more likely to be attacked.
Ethan describes the case of Viet Tan, which is under DDoS attacks almost constantly — to the extent that when they put up a new web service, it is attacked within hours. As a result, Viet Tan has shifted many of their new campaigns to Blogger (blogspot.com) blogs.
Viet Tan is struggling in particular because they’re not only experiencing DDoS attacks; they also face filtering at the national level, from a government that wants to prevent people in Vietnam from accessing their site. Ethan says that 81% of sites in the study that had experienced a DDoS attack had also experienced intrusion, filtering, or another form of attack. In Viet Tan’s case, the site was being attacked unwittingly by its own target audience, many of whom were using a corrupted Vietnamese keyboard driver that allowed their computers to be used as part of a botnet attack.
One of the big problems for sites that are DDoS-ed is that their ISPs may jettison them in order to protect other sites on the same server. Of the attacked sites in the study, 55% were shut down by their ISP, while only 36% were successfully defended by it.
An attack against Irrawaddy, a Burmese activist site hosted in Thailand, essentially knocked all of Thailand offline. In response, Irrawaddy’s ISP asked the site to move elsewhere. This year, Irrawaddy was hit by a larger attack; although its new, stronger ISP might have been able to protect it, the site hadn’t paid for the necessary level of protection and was again shut down.
Hal and Ethan suggest that a system of social insurance is emerging online, at least among larger sites — everything is starting to cost a little bit more, with the extra cost subsidizing the sites that are attacked. The problem is that small Internet sites aren’t protected, because they’re not in the core.
Hal and Ethan wonder whether someone should build dedicated human rights hosting to protect these sites from attacks. The problem with this is that it collects all these sites into a single location, meaning any company that hosted a group of these sites would be a major target for DDoS attacks. Devising a fair pricing system in this case is tricky.
Ethan raises the issue of intermediary censorship — the constant threat that your hosting company may shut your site down for any reason (e.g., when Amazon shut down Wikileaks). This is a problem of Internet architecture, he says, and there are two solutions: building an alternative, peer-based architecture, or creating a consumer movement that puts sufficient pressure on hosting companies not to take sites down.
What Hal and Ethan ended up recommending to these sites is to have a back-up plan; to minimize dynamic pages; to have robust mirroring, monitoring and failover; to consider hosting on Blogger or a similar large site; and to avoid using the cheapest hosting provider.
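To illustrate the mirroring, monitoring and failover recommendation, here is a minimal sketch: probe the primary site and repoint a low-TTL DNS record at a mirror when it stops answering. All hostnames, addresses and thresholds are hypothetical, and update_dns() is a placeholder, since the real call depends entirely on the DNS provider’s API.

```python
import time
import urllib.request

PRIMARY = "https://example.org/health"  # hypothetical health-check URL
MIRROR_IP = "203.0.113.10"              # hypothetical mirror address
CHECK_INTERVAL = 30                     # seconds between probes (assumed)
FAILURES_BEFORE_FAILOVER = 3            # tolerate brief blips (assumed)

def site_is_up(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False  # timeouts, connection errors, HTTP errors all count as down

def update_dns(ip: str) -> None:
    # Placeholder: call your DNS provider's API here to repoint the site's
    # A record at `ip`. Keep the record's TTL low so the change takes
    # effect quickly once the primary goes down.
    print(f"would repoint DNS to {ip}")

failures = 0
while True:
    if site_is_up(PRIMARY):
        failures = 0
    else:
        failures += 1
        if failures >= FAILURES_BEFORE_FAILOVER:
            update_dns(MIRROR_IP)
            break
    time.sleep(CHECK_INTERVAL)
```

The same probe doubles as the monitoring piece of the recommendation: in practice it would also alert a human well before traffic is actually rerouted.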
Within some communities, Ethan says, a person or group emerges as the technical contact, advocating for sites that are under attack. These “tech leaders” are connected to one another and to companies in the core that want to help. The problem is that this isn’t a particularly scalable model — a better chain needs to be established, so that problems can escalate through a team of local experts up to larger entities. In the meantime, it’s essential to increase organized public pressure on private companies not to act as intermediary censors, but rather to support these sites.
Thanks Rebekah. I’ll follow the link to the full report.
@Mister Wendal — I’m by no means an expert and don’t want to risk giving you an inaccurate answer, so instead I’ll point you to the full report, which should answer your questions.
I just so happen to be preparing to take the CCIE Security lab exam soon, and I found this an interesting article, as the whole focus of my exam is practical solutions to defeat everyday network and system attacks.
What I would like to ask is about the information in this paragraph, and I quote: “This is because these sites are located close to the core of the network, where around 75% of ISPs are able to respond to DDoS attacks in less than an hour, making DDoS attacks a ‘manageable problem.’ The sites at the edge of the network are much more vulnerable, and they’re also much more likely to be attacked.”
What exactly was meant by sites located close to the network core? Do you mean websites hosted on servers located at the core of the network versus those on servers at the edge?