Internet engineers have known for at least 13 years how to stop major distributed denial of service attacks. But thanks to a combination of economics and inertia, attacks continue. Here's why.
Nearly 13 years ago, the wizardly band of engineers who invented and continue to defend the Internet published a prescient document they called BCP38, which described ways to thwart the most common forms of distributed denial-of-service attack.
BCP38, short for Best Current Practice #38, was published soon after debilitating denial of service attacks crippled eBay, Amazon, Yahoo, and other major sites in February 2000. If those guidelines to stop malcontents from forging Internet addresses had been widely adopted by the companies, universities, and government agencies that operate the modern Internet, this week's electronic onslaught targeting Spamhaus would have been prevented.
But they weren't. So a 300-gigabit-per-second torrent of traffic flooded the networks of companies including Spamhaus and CloudFlare, along with key Internet switching stations in Amsterdam, Frankfurt, and London. It was like 1,000 cars trying to crowd onto a highway designed for 100 vehicles at a time. CloudFlare dubbed it, perhaps a bit too dramatically, the attack "that almost broke the Internet."
BCP38 outlined how providers can detect and then discard the kind of forged Internet addresses that were used in this week's DDoS attack. Since its publication, though, adoption has been haphazard. Hardware often needs to be upgraded, employees and customers trained, and routers reconfigured. For most providers, in other words, the costs have exceeded the benefits.
"There's an asymmetric cost-benefit here," said Paul Vixie, an engineer and Internet pioneer who serves on the Internet Corporation for Assigned Names and Numbers' security advisory board. That's because, Vixie said, the provider that takes the time to secure its networks makes all the investment, while other providers "get all the reward."
BCP38 is designed to verify that someone claiming to be located in a certain corner of the Internet actually is there. It's a little like a rule that the Postal Service might impose if there's a deluge of junk mail with fake return addresses originating from a particular ZIP code. If you're sending a letter from San Francisco, the new rule might say, your return label needs to sport a valid northern California address, not one falsely purporting to originate in Hong Kong or Paris. It might annoy the occasional tourist, but it would probably work in most cases.
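In code, the idea behind that rule -- check each packet's claimed source address against the addresses actually assigned to the link it arrived on -- can be sketched like this. The prefixes below are hypothetical documentation addresses, and real BCP38 filtering happens in router hardware, not in Python:

```python
import ipaddress

def make_ingress_filter(customer_prefixes):
    """Mimic BCP38 ingress filtering: accept a packet only if its
    source address falls inside one of the prefixes assigned to the
    customer link it arrived on."""
    nets = [ipaddress.ip_network(p) for p in customer_prefixes]

    def accept(source_ip):
        addr = ipaddress.ip_address(source_ip)
        return any(addr in net for net in nets)

    return accept

# Hypothetical customer assigned the documentation prefix 203.0.113.0/24.
accept = make_ingress_filter(["203.0.113.0/24"])
print(accept("203.0.113.7"))   # legitimate source address: accepted
print(accept("192.0.2.55"))    # forged source address: dropped
```

The occasional legitimately multihomed customer needs special handling, just as the occasional tourist would in the postal analogy, but for the vast majority of links the check is this simple.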
This week's anti-Spamhaus onslaught relied on attackers spoofing Internet addresses, then exploiting a feature of the domain name system (DNS) called open recursors or open recursive resolvers. Because DNS runs over UDP, a protocol with no handshake, a resolver sends its reply -- which can be more than 30 times larger than the query -- to whatever source address the query claims, letting attackers amplify their traffic and overwhelm all but the best-defended targets.
Preventing spoofing through BCP38 will prevent this type of amplification attack. "There is no way to exploit DNS servers of any type, including open recursors, to attack any third party without the ability to spoof traffic," said Arbor Networks' Roland Dobbins. "The ability to spoof traffic is what makes the attack possible. Take away the ability to spoof traffic, and DNS servers may no longer be abused to send floods of traffic to DDoS third parties."
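The arithmetic of amplification is simple: a small spoofed query elicits a much larger response aimed at the victim. A rough sketch, using illustrative packet sizes rather than measured figures from this attack:

```python
# Illustrative sizes only: a DNS query is on the order of 60 bytes,
# while a large response (for instance, one carrying DNSSEC records)
# can run to a couple of thousand bytes.
query_bytes = 60
response_bytes = 2000

amplification = response_bytes / query_bytes
print(f"amplification factor: {amplification:.0f}x")

# At better than 30x amplification, an attacker who controls roughly
# 10 Gbps of spoofing-capable upstream capacity can aim on the order
# of 300 Gbps at a victim.
attacker_gbps = 10
victim_gbps = attacker_gbps * amplification
print(f"{attacker_gbps} Gbps in -> {victim_gbps:.0f} Gbps out")
```

That multiplier is exactly why Dobbins singles out spoofing: without a forged source address, the flood of responses simply returns to the attacker.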
Other countermeasures exist. One of them is to lock down open recursive resolvers by allowing them to be used only by authorized users. There are about 27 million DNS resolvers on the global Internet. Of those, a full 25 million "pose a significant threat" and need to be reconfigured, according to a survey conducted by the Open Resolver Project. Reprogramming them all is the very definition of a non-trivial task.
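The "authorized users only" check a locked-down resolver performs amounts to a single membership test before answering a recursive query. A minimal sketch, with a hypothetical customer network (real resolvers express this in configuration, such as an access control list, rather than application code):

```python
import ipaddress

# Hypothetical: this operator's own customers live in 198.51.100.0/24.
ALLOWED = ipaddress.ip_network("198.51.100.0/24")

def should_answer_recursive_query(client_ip: str) -> bool:
    """A locked-down resolver answers recursive queries only for
    clients inside the networks it is meant to serve; everyone
    else -- including spoofed victims -- is refused."""
    return ipaddress.ip_address(client_ip) in ALLOWED

print(should_answer_recursive_query("198.51.100.20"))  # customer: True
print(should_answer_recursive_query("192.0.2.9"))      # stranger: False
```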
"You could stop this attack in either of two ways," said Matthew Prince, co-founder and CEO of CloudFlare, which helped defend against this week's attack. "One, shut down the open resolvers, or two, get all the networks to implement BCP38. The attackers need both in order to generate this volume of attack traffic."
Alternatively, networks don't need to lock down open resolvers completely. Google, which operates one of the world's largest networks, has adopted an innovative rate-limiting technique. It describes rate-limiting as a way to "protect any other systems against amplification and traditional distributed DoS (botnet) attacks that could be launched from our resolver servers."
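Per-client rate limiting of the general kind Google describes can be sketched as a token bucket: each client earns query credit at a steady rate, can burst briefly, but cannot sustain a flood. The limits below are made up for illustration and are not Google's:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow a steady per-client query rate, absorbing short bursts
    but refusing the sustained floods an amplification attack needs.
    The rate and burst values are illustrative, not Google's."""

    def __init__(self, rate=10.0, burst=20.0):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)
        self.last = defaultdict(time.monotonic)

    def allow(self, client_ip):
        now = time.monotonic()
        elapsed = now - self.last[client_ip]
        self.last[client_ip] = now
        self.tokens[client_ip] = min(
            self.burst, self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1:
            self.tokens[client_ip] -= 1
            return True
        return False

bucket = TokenBucket(rate=10, burst=20)
answered = sum(bucket.allow("198.51.100.20") for _ in range(1000))
print(f"answered {answered} of 1000 back-to-back queries")
```

A legitimate client making a handful of lookups never notices the limit; a reflector being milled for 300 Gbps of attack traffic contributes almost nothing.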
But few companies, universities, individuals, and assorted network operators are going to be as security-conscious as Mountain View's teams of very savvy engineers. Worse yet, even if open recursive resolvers are closed to the public, attackers can switch to other services that rely on UDP, the Internet's User Datagram Protocol. Network management protocols and time-synchronization protocols -- all designed for a simpler, more innocent era -- can also be pressed into service as destructive traffic reflectors.
The reflection ratios may not be as high as 1:30, but they're still enough to interest someone with malicious intent. Arbor Networks has spotted attacks based on traffic amplification from SNMP, a network management protocol, that exceed 30 gigabits per second. Closing open DNS resolvers won't affect attacks that use SNMP to club unwitting targets.
Which is, perhaps, the best argument for BCP38. The most common way to curb spoofing under BCP38 is with a technique called Unicast Reverse Path Forwarding (uRPF) to try to weed out unwanted traffic. But that needs to be extended to nearly every customer of a provider or network operator, a daunting undertaking.
Nick Hilliard, chief technology officer for INEX, an Internet exchange based in Dublin, Ireland, said:
BCP38 is harder than it looks because in order to implement it properly, you need to roll out uRPF or interface [access control lists] to every single customer edge point on the internet. I.e. every DSL link, every cable modem, every VPS in a provider's cloud hosting centre and so forth. The scale of this undertaking is huge: there is lots of older (and sometimes new) equipment in production out there which either doesn't support uRPF (in which case you can usually write access filters to compensate), or which supports uRPF but badly (i.e. the kit might support it for IPv4 but not IPv6). If you're a network operator and you can't use uRPF because your kit won't support it, installing and maintaining individual access filtering on your customer edge is impossible without good automated tools to do so, and many service providers don't have these.
Translation: It all adds up to being really hard.
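In the abstract, the uRPF check Hilliard describes is straightforward: a router accepts a packet only if its best route back to the packet's source address points out the interface the packet arrived on. A sketch with a toy routing table (a real router consults its forwarding table in hardware):

```python
import ipaddress

# Toy routing table: prefix -> interface the router would use to
# reach that prefix. The prefixes and interface names are invented.
ROUTES = {
    ipaddress.ip_network("203.0.113.0/24"): "eth0",   # customer A
    ipaddress.ip_network("198.51.100.0/24"): "eth1",  # customer B
}

def strict_urpf_accept(source_ip: str, arrival_interface: str) -> bool:
    """Strict uRPF: accept a packet only if the route back to its
    source address goes out the interface it came in on."""
    addr = ipaddress.ip_address(source_ip)
    for prefix, iface in ROUTES.items():
        if addr in prefix:
            return iface == arrival_interface
    return False  # no route back to the source at all: drop

print(strict_urpf_accept("203.0.113.5", "eth0"))  # route matches: accept
print(strict_urpf_accept("203.0.113.5", "eth1"))  # spoofed on wrong port: drop
```

The check itself is cheap; as Hilliard notes, the hard part is deploying it, or equivalent access filters, on every customer edge port on the Internet.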
Vixie, who wrote an easy-to-read description of the problem back in 2002, suggested it's a little like fire, building, and safety codes: the government "usually takes a role" forcing everyone to adopt the same standards, and roughly the same costs. Eventually, he suggests, nobody complains that their competitors are getting away without paying compliance costs.
That argument crops up frequently enough in technical circles, but it tends to be shot down just as fast. For one thing, wielding a botnet to carry out a DDoS attack is already illegal in the United States and just about everywhere else in the civilized world. And as a practical matter, botnet-managing criminals can change their tactics faster than a phalanx of professional bureaucrats in Washington, D.C. or other national capitals can respond.
INEX's Hilliard said the real answer is to change the economics to make it less profitable to carry out DDoS attacks.
When sending spam was cheap, Hilliard said, he was receiving 10,000 Viagra offers a month. But after network providers took concerted steps to crack down, "the economics changed and so did the people who were abusing the Internet, and now I get about 2,000 a month, all of which end up in my spam folder," he said. "The same thing will happen to DDoS attacks: in 10 years' time, we will have a lot more in terms of BCP38 coverage, and we won't get upset as much about the small but steady stream of 300-gigabit attacks."