BlackHat USA 2012

As those of you still reading have probably noticed, I took a rather long hiatus from blogging. However, since my last published posts were a recap of BlackHat and DefCon in 2011, this seems like a great place to start up again! So, without further ado, a trip report:

This year I’ve decided to make a departure from the talk-by-talk trip reports I’ve done in the past. Most of the interesting presentations are already online (the whitepapers and slide decks, at least) and I’ll link to them here, but overall this was a very interesting year in information security and I think the gestalt and the keynotes are more important than the specific exploits demonstrated.

BlackHat has changed from what it was five years ago. The criticism that it’s turned into a “vendor dog-and-pony show,” while harsh (it’s still worlds better than the RSA Conference), has some truth to it — the security presented at BlackHat these days is mostly the kind that comes in a box and slides into a server rack. However, the reputation and importance the conference has long had still draws some interesting speakers, who will often put off revelations for weeks or months to be able to reveal them at what is still the world’s #1 professional security conference. Nevertheless, as someone whose occupation is in secure design, architecture, and development, without some significant changes I think that the time of my attending BlackHat every year is coming to a close. (Also, to RSA, McAfee, and a few other vendor offenders: “booth babes” are really tacky at a professional conference. BlackHat isn’t an auto show, the practice offends a fair number of attendees, and it strains credulity to imagine that anyone purchases, say, cryptography hardware based on the hot girl at your booth who couldn’t actually answer any questions about your product. Given that in the past three years I’ve not seen this marketing practice met with anything but ridicule, I’m kind of amazed you keep it up.)

DT (The Dark Tangent, Jeff Moss) introduced the conference as always, and this year’s Day 1 keynote was delivered by retired FBI Assistant Director Shawn Henry. With his time on the Homeland Security Advisory Council and his current role in ICANN’s byzantine and unaccountable bureaucracy, DT often seems to have very much “gone native” in government; this was at least the third consecutive year of a government keynote dropping “cyber” into every third sentence. DT’s intro was surprising, though — he brought up the Strikeback firewall from 15 years ago (a programmable firewall that could DoS your attackers, should you happen to feel like programming your corporate network to carry out automated felonies) and observed that you can counterstrike attackers with lawsuits, diplomacy, or direct action. A mostly-favorable view of counter-hacking was not a viewpoint I’d heard publicly expressed in years, especially from someone with government connections. Shawn Henry presented a rather militarized view of the information security landscape, going so far as to call computer network attacks “the #1 threat to global security.” That seems hyperbolic to me, but at least it shows they take the problem seriously.

On one hand, Henry showed considerable insight into the scope of the problem — better than we’ve historically seen from government (many previous BlackHat government keynotes have been laughable.) While he claims that the vast majority of hacking & data breaches happen in the classified environment where we never hear about it — a claim I find dubious just due to the sheer difference in scale between the classified environment and the Internet, but can’t wholly rule out either — he recognizes that the “cyber domain” is a great equalizer. A sophisticated organized-crime group or circle of motivated hackers does not differ meaningfully in capability from a state-sponsored actor; while Stuxnet and Flame may have been crafted by governments, they do not differ in sophistication from other advanced malware, and plenty of people outside the classified sphere have access to 0-days. It doesn’t take billions of dollars and government resources to carry out a major electronic attack.

(An aside: while this wasn’t something Henry talked about, one of my biggest concerns for the future is the advancing state of 3D printing, desktop manufacturing, and synthetic biology. You can assemble a synthetic biology lab and genetically engineer organisms in your garage at this point on a budget within the reach of a well-off amateur. While molecular nanotechnology is a long way off still, I have no idea how we mount a defense when fail-once, fail-everywhere existential threats can come from individual nuts anywhere in the world.)

Mr. Henry claims that we have to go from mitigating the vulnerability to mitigating the threat — that is, prevent the attacks from happening in the first place. Just as the FBI had to go from measuring cases and arrests to measuring threat elimination when it became an international intelligence agency after 9/11, we need to move from trying to set up perimeters that keep attackers off the network to trying to detect and remediate attacks as they happen. We have to assume a breach and plan accordingly. Given that the progressive decline in the effectiveness of perimeters, and the need for distributed defense throughout the network, the enterprise, and the world, is the reason behind the name of this blog, that part at least I agree with.

Henry says that the NSA and DHS have the authority and responsibility to protect government and military networks, but no one in government is monitoring the commercial space. This is a contrast with how other countries do things — the United States is unusual in not having an overt industrial espionage program that attempts to advantage local businesses. He exhorts us — people in the information security sphere — to be proactive in finding out who our adversaries are and to share that data with the government. Judging by the questions (some of which were amusingly prefixed with “Without using the word ‘cyber’…”), this was not a popular view, for a couple of reasons.

The panel following the keynote (including DT, Jennifer Granick, Bruce Schneier, and Marcus Ranum, all BlackHat alumni from the very first conference) went into these criticisms further. One is of course that “sharing” with the government tends to be a one-way street, which prevents people from viewing them as a partner. The other, however, is that many felt that this is a spectacular abdication of responsibility by the government — Really? We’re supposed to keep the Chinese intelligence community out of our servers? And what are the billions of tax dollars going to the NSA and DHS for, then?

The panel also went into an interesting discussion of what they — as some of the luminaries of information security — believe a CISO should be spending their money on. DT advised spending it on their employees, which of course went over swimmingly with this audience. Other advice included spending on security generalists rather than experts in particular tasks or technologies, since narrow expertise is increasingly available via outsourcing, and focusing on detection and response rather than defense and prevention. “The cloud” is going to happen whether CISOs want it to or not, so we have to find a way to have a data-centric model where we know what’s out there and can tell if it’s been tampered with; keeping everything behind walls will not work indefinitely.

A final controversy came up over the issue of government-sponsored hacking like Flame and Stuxnet. DT looked favorably on it, saying that before this there wasn’t really any room between harsh words and dropping 2000-pound bombs; governments having a tool available to them to carry out an attack without killing people is on balance good for the world. Jennifer Granick and Marcus Ranum vehemently disagreed, describing it as a crime against humanity: putting civilian infrastructure on the front lines of a nonexistent war, then telling people to be glad that at least we didn’t blow them up. The world of information security has become the political world; the world has changed such that Internet policy is just policy now.

There was also a keynote interview with author Neal Stephenson, which while entertaining did not really provide any insight so I’m not going to relate it here. If you’re a science fiction fan, though, it’s worth looking up when it inevitably appears on YouTube in a few weeks.

And now, on to the talks. One interesting trend was the recurring theme of attacks on pseudo-random number generators. Dan Kaminsky discussed how PRNG breaks have compromised RSA keys (about 0.5% of them) and Debian OpenSSH, while hardware RNG just isn’t available most of the time. The problem is that our entropy pools are very limited on servers, VMs, the cloud, and embedded devices — we don’t have a keyboard or mouse and frequently don’t even have a disk or good hardware interrupts. TrueRand (which used an interrupt every 16ms to generate noise) was disavowed by its author but does still work, and Dan presented DakaRand, which uses multiple timers and threads to generate noise, since multithreaded programming is to a degree nondeterministic (his algorithm then SHA-2 hashes the noise, applies Von Neumann debiasing, Scrypts the results, and uses AES-256-CTR to turn it all into a stream.) Each call is independent, so it’s secure against VM cloning; unfortunately, most developers will just keep calling /dev/urandom.
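To make the thread-jitter idea concrete, here is a minimal sketch of the general approach — this is an illustration only, not Kaminsky’s actual DakaRand (which adds the debiasing, Scrypt, and AES-256-CTR stages described above): race a busy-looping thread against a timer and hash the unpredictable counts.

```python
# Minimal sketch of clock-drift/thread-jitter entropy gathering (illustration
# only, not DakaRand itself): how far a spinning thread gets in a fixed time
# slice depends on scheduler and clock jitter, which is hard to predict.
import hashlib
import threading
import time

def _spin(counter, stop):
    # Increment as fast as the scheduler allows until told to stop.
    while not stop.is_set():
        counter[0] += 1

def gather_noise(rounds=256, slice_s=0.001):
    samples = []
    for _ in range(rounds):
        counter = [0]
        stop = threading.Event()
        worker = threading.Thread(target=_spin, args=(counter, stop))
        worker.start()
        time.sleep(slice_s)          # the count reached here is timing-dependent
        stop.set()
        worker.join()
        samples.append(counter[0])
    # Condense the raw, biased samples into a fixed-size seed.
    raw = b"".join(s.to_bytes(8, "little") for s in samples)
    return hashlib.sha512(raw).digest()

if __name__ == "__main__":
    print(gather_noise().hex())
```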

George Argyros and Aggelos Kiayias proceeded to demonstrate a variety of entropy-reduction, seed-recovery, and state-recovery attacks against random number generators, managing to compromise PHP session cookies and administrative recovery tokens across multiple applications. They went into some detail on the various PRNG implementations on both Linux and Windows and how entropy can be reduced or the seed reverse-engineered; some of the techniques were fascinating (like attacking your own session cookie first to build an application-specific rainbow table.) If you’re up for the crypto math, take a look at their presentation.
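As a toy illustration of why low-entropy seeding is so dangerous (this is a made-up token scheme, not the attack from their paper): if a token is derived from a PRNG seeded with something like a timestamp, one observed token is enough to brute-force the seed and predict every subsequent value.

```python
# Toy illustration of a PRNG seed-recovery attack (hypothetical token scheme,
# not the one from the Argyros/Kiayias paper): a token generated from a
# time-seeded PRNG can be reversed by enumerating plausible seeds.
import hashlib
import random
import time

def make_token(seed):
    rng = random.Random(seed)
    return hashlib.md5(str(rng.random()).encode()).hexdigest()

# "Server": seeds with the current second (weak) and issues a token.
server_seed = int(time.time())
observed = make_token(server_seed)

# "Attacker": tries every second in a plausible window until a token matches.
now = int(time.time())
for guess in range(now - 3600, now + 1):
    if make_token(guess) == observed:
        print("recovered seed:", guess)
        # With the seed known, every later "random" value is predictable too.
        break
```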

Other topics included a resurgence of attacks on NTLM (mostly pass-the-hash and SMB relay variants), using browser exploits to pivot into more traditional infrastructure exploits against routers, the recently-publicized exploits against Windows Vista/Windows 7 gadgets (which were well known and described in Microsoft’s own gadget security whitepaper; in short, it’s easy for a developer to write a vulnerable gadget, so most of them do so), and persistent injection of browser exploits during a MitM session (e.g. on open WiFi.)

However, there was one other talk I’d like to go into some detail on — iSec Partners presented “The Myth of Twelve More Bytes,” on the impact of IPv6, DNSSEC, and the new commercial gTLDs (the top-level domain identifiers like .com and .net) being issued by ICANN. In short, these changes remove artificial scarcity from the Internet in a variety of ways that are not broadly understood, and this is fundamentally changing the architecture of the network, because assumptions of scarcity are deeply woven into our current designs.

IPv6 is not just changing IP addresses from 4 bytes to 16. It makes substantial changes to the layer 2 through 4 network stacks: removing ARP, modifying TCP, UDP, ICMP, and DHCP, and adding new mechanisms like SLAAC (Stateless Address Autoconfiguration.)

ICMP becomes critical infrastructure — you can’t blindly drop it, or SLAAC, router discovery, and neighbor discovery stop working. Yet those are unauthenticated protocols, spoofable by anyone on the segment; duplicate address detection is also impractical to rely on, since it can be spoofed for a DoS. SLAAC eliminates the need for DHCP (though DHCPv6 does exist for procuring additional addresses) by letting clients give themselves a link-local IP under the well-known SLAAC prefix, ask routers for the network prefix, and append an identifier derived from their own MAC address to get a globally-unique IP. There is also a protocol (RFC 4941, Privacy Addresses) for generating additional random addresses, since for obvious reasons not everyone wants a permanent, immutable, Internet-routable, globally unique IP (it would sure make ad networks’ and trackers’ jobs easier.)
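For the curious, here is a small sketch of the modified EUI-64 derivation SLAAC traditionally uses to build that MAC-based interface identifier (the prefix and MAC below are made up); the point is that the hardware address is recoverable directly from the resulting IP.

```python
# Sketch of modified EUI-64: how SLAAC traditionally derives the low 64 bits
# of an IPv6 address from a MAC address (example prefix/MAC are made up).
def eui64_address(prefix, mac):
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                # flip the universal/local bit
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert ff:fe in the middle
    groups = ["%02x%02x" % (iid[i], iid[i + 1]) for i in range(0, 8, 2)]
    return prefix + ":".join(groups)

print(eui64_address("2001:db8:1:1:", "00:1a:2b:3c:4d:5e"))
# -> 2001:db8:1:1:021a:2bff:fe3c:4d5e  (the original MAC is plainly visible)
```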

Unless you’ve disabled IPv6 entirely on your Windows machines, the fact that you haven’t rolled out IPv6 on your network yet does not protect you from IPv6 attacks. SLAAC is the default on Windows, so your Windows machines all have IPv6 addresses already, and the IP stack in Windows 7 and beyond prefers v6 over v4 — your Windows machines are just waiting for something to come along and talk IPv6 to them, and it doesn’t have to be you. Likewise, there are many IPv6-over-IPv4 transition mechanisms, like 6to4, 6rd, ISATAP, and Teredo, and while these can be blocked, you have to do so proactively — they just work otherwise, on your existing hardware and software. Your Windows servers may be speaking to a hacker on the Internet via Teredo right now without your knowledge.
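A quick way to see the “prefers v6 over v4” behavior from userland is to look at the order in which name resolution returns addresses. A rough, cross-platform sketch (the hostname here is just an arbitrary dual-stacked name, and the ordering also depends on your local connectivity):

```python
# Rough check of address-family preference: per RFC 6724 default address
# selection, dual-stacked hosts typically order AAAA results ahead of A,
# so unmodified applications end up "preferring" IPv6.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo(
        "www.google.com", 443, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```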

Other interesting implications of IPv6: since unlimited extension headers are allowed, TCP packets may be fragmented before the TCP header — you can’t do stateless filtering on port number! And when it comes to stateful filtering, consider that you can’t keep a full state table (the number of possible source addresses to keep state on is unfathomably large) and attackers can send every packet from a brand new, totally valid IP address — these aren’t spoofed addresses, they can make full connections. Stateful filtering that’s not subject to trivial DoS is going to be a challenge.
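Back-of-the-envelope arithmetic makes the state-table point vivid: a single /64 hands an attacker far more legitimate source addresses than any filter could track, even at one bit of state apiece.

```python
# Why per-source-address state can't scale under IPv6: one /64 subnet alone
# contains 2^64 addresses, so even a single bit of state per address is
# thousands of petabytes.
addresses_per_64 = 2 ** 64
table_bytes = addresses_per_64 // 8          # one bit of state per address
print(f"{addresses_per_64:,} addresses in a single /64")
print(f"{table_bytes / 2**50:,.0f} PiB at one bit per address")
```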

The new top-level domain process also causes its share of problems by breaking the Internet’s trust model. And there are going to be a lot of new top-level domains — despite the $185,000 application fee for a top-level domain, Amazon has applied for 76 and Google for 121, and that’s just for “brand” domains they want to reserve entirely for themselves and not run as a registry. (i.e. Google has decided that all “.search” sites should be Google sites; likewise, “.shoes” or “.clothes” domains will all be Amazon’s.) While the process for applying for a gTLD is labyrinthine, there’s no step of the process that tries to judge whether or not issuing a domain is a good idea.

Currently we assume that a trademarked domain likely belongs to its most recognizable holder — paypal.com goes to PayPal. But who does paypal.rugby go to? Paypal.shoes (if it were to exist) goes to Amazon. With 1400 new gTLDs, many of which will be run as public registries, this assumption no longer holds — “defensive registration” becomes impossible because you can’t even identify all the registries. We assume that IP spoofing is highly limited in full-connection situations, but under IPv6 I don’t need IP spoofing to create unlimited connections. We assume that an individual is highly attached to a few IP addresses (their home, work, proxies, VPNs, etc.), an assumption that will no longer hold true. How do we handle browser homograph attacks when they can exist across 1400 TLDs?

How do we do anti-fraud and adaptive authentication without IP scarcity? What about DDoS prevention, rate limiting, IDS, SIEM, event correlation — even load balancing? How do we do IP reputation? Moving it up to the network level is a problem since one bad actor could DoS their entire network provider. Right now, if you advertise an IPv4 space you don’t own, the people who do own it will notice and complain — but in IPv6, if you own an AS — any AS at all — you can create IP ranges for dedicated tasks, advertise them, then tear them down and leave little evidence they ever existed at all.

For all my complaining about the changing nature of BlackHat (the vendor floor is really the center of the conference now), it certainly brought to my attention some issues that are not yet “common knowledge” in the information security world. It’s already past time to start preparing for these things; attackers certainly are.
