South Carolina Hack Attack Root Causes

Recently, the South Carolina Department of Revenue was hacked, losing tax records on 3.6 million people — that is, most of South Carolina’s population. The records contained Social Security numbers at the very least, along with 3.3 million bank account numbers, and may have included full tax returns (the state hasn’t said).

There’s been the usual casting of blame after such an incident, but it’s quite interesting to read over the incident response report they had Mandiant prepare for them. Despite being “PCI-Compliant”, they had a number of vulnerabilities that let the hackers break in. But what could they really have done to protect themselves? From the report, the attacker went through 16 steps:

1. August 13, 2012: A malicious (phishing) email was sent to multiple Department of Revenue employees. At least one Department of Revenue user clicked on the embedded link, unwittingly executed malware, and became compromised. The malware likely stole the user’s username and password. This theory is based on other facts discovered during the investigation; however, Mandiant was unable to conclusively determine if this is how the user’s credentials were obtained by the attacker.

It’s not clear here if this was untargeted spam phishing with off-the-shelf malware, or a spear-phishing attack on the DOR with custom malware. If it’s the former, then this would have been prevented by any decent mail security product (to block spam and phishing) and desktop anti-malware software with current signatures & centralized monitoring. Since I would think any “PCI-Compliant” institution would have this, my guess is that this was a spear-phishing attack. The unfortunate fact is that there’s basically nothing you can do about spear-phishing and targeted malware; by its nature it evades automated detection, and security awareness training is of limited effectiveness against a phishing mail customized for your employees. So far there’s no sign that the state DOR screwed up here.

2. August 27, 2012: The attacker logged into the remote access service (Citrix) using legitimate Department of Revenue user credentials. The credentials used belonged to one of the users who had received and opened the malicious email on August 13, 2012. The attacker used the Citrix portal to log into the user’s workstation and then leveraged the user’s access rights to access other Department of Revenue systems and databases with the user’s credentials.

And right here in step 2 I think we’ve found the root cause of the attack. They had an external remote access service that allowed single-factor login — coming in through the perimeter from the Internet using only a password. Given that spear-phishing & targeted malware are not preventable, you have to assume that passwords will be stolen and have barriers in place to keep password-bearing attackers out; two-factor auth on remote access services should be a bare minimum, whether that’s SecurID tokens, smart cards, or other mechanisms.
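To illustrate what a second factor buys, here’s a minimal sketch of RFC 6238 TOTP, the algorithm behind most soft tokens (an illustrative implementation, not any particular vendor’s product):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 -> "287082"
print(totp(b"12345678901234567890", now=59))  # → 287082
```

The point is that even with the password stolen via malware, the attacker also needs the current code, which changes every 30 seconds and is derived from a secret that never leaves the token.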

3. August 29, 2012: The attacker executed utilities designed to obtain user account passwords on six servers.

Dumping the LSA secrets requires administrative privileges. It’s possible the credentials the attacker acquired in step 1 were administrative on some servers, in which case there’s no new exploit here. But if they weren’t, the attacker elevated privileges in some way, implying that the DOR might have had a patch-management problem. Once again, though, it’s not clear there’s much they could have done about it — patching within 30-60 days is actually very difficult for an enterprise of decent size, even a mature, technically competent one. If the attacker used a recent exploit, the DOR might well have been no worse off, patching-wise, than everyone else. On the other hand, if they used something ancient, this might be another failure on the DOR’s part. That said, with proper authentication on the remote access service, the attacker shouldn’t have even gotten this far.

4. September 1, 2012: The attacker executed a utility to obtain user account passwords for all Windows user accounts. The attacker also installed malicious software (“backdoor”) on one server.

At this point the attacker is a domain administrator; if he’s dumping “all Windows user accounts” he’s got at least a network login on the domain controller. Chances are that a domain admin had logged onto the first compromised server at some point, and thus the attacker captured his cached credentials. No new attacks or exploits here.

5. September 2, 2012: The attacker interacted with twenty one servers using a compromised account and performed reconnaissance activities. The attacker also authenticated to a web server that handled payment maintenance information for the Department of Revenue, but was not able to accomplish anything malicious.
6. September 3, 2012: The attacker interacted with eight servers using a compromised account and performed reconnaissance activities. The attacker again authenticated to a web server that handled payment maintenance information for the Department of Revenue, but was not able to accomplish anything malicious.
7. September 4, 2012: The attacker interacted with six systems using a compromised account and performed reconnaissance activities.
8. September 5 – 10, 2012: No evidence of attacker activity was identified.
9. September 11, 2012: The attacker interacted with three systems using a compromised account and performed reconnaissance activities.

Nothing interesting here. Very few enterprises could have detected the above; it would require the sort of aggressive NIDS with extensive monitoring that’s normally only found in classified environments.

10. September 12, 2012: The attacker copied database backup files to a staging directory.
11. September 13 and 14, 2012: The attacker compressed the database backup files into fourteen (of the fifteen total) encrypted 7-zip archives. The attacker then moved the 7-zip archives from the database server to another server and sent the data to a system on the Internet. The attacker then deleted the backup files and 7-zip archives.

This was a database exfiltration of over 8 gigabytes of data. This is actually one thing that NIDS could be effective against if tuned properly.
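As a sketch of the kind of tuning that would catch this, here’s a toy egress-volume monitor. The flow records, hostnames, and 1 GiB threshold are all hypothetical; a real NIDS would consume NetFlow/IPFIX and know the site’s actual address plan:

```python
from collections import defaultdict

THRESHOLD = 1 * 2**30  # flag anything over 1 GiB leaving the network

def flag_exfiltration(flows, threshold=THRESHOLD):
    """Sum outbound bytes per internal host; return hosts over threshold."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        if not dst.startswith("10."):        # crude "external destination" test
            totals[src] += nbytes
    return {host for host, total in totals.items() if total > threshold}

# Hypothetical flow records: (source_host, destination_ip, bytes_out).
flows = [
    ("dbserver", "10.0.0.5", 5 * 2**30),     # internal backup traffic: fine
    ("dbserver", "203.0.113.7", 8 * 2**30),  # 8 GB to the Internet: not fine
    ("desktop", "203.0.113.9", 40 * 2**20),
]
print(flag_exfiltration(flows))  # → {'dbserver'}
```

A database server that has never sent more than a few megabytes out to the Internet suddenly pushing 8 GB of compressed archives is exactly the anomaly this kind of rule exists for.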

The remainder of the attack steps were just some more reconnaissance, backdoor testing, and other probes, followed by Mandiant shutting down the attacker’s entry point.

The interesting thing here is that assuming this was spear-phishing with targeted malware, the only mistakes the DOR seems to have made were insufficient IDS tuning (which is honestly usually high-effort, low-payoff security work) and having single-factor remote access (which is catastrophic.) There’s nothing in this report that makes it look like the DOR’s IT department was run by a gang of idiots (like in, say, last year’s many Sony attacks); it looks like an organization that was doing most things right but had failed to deploy two-factor remote access. I’d wager their IT security guys wanted to, too, but were blocked by either the inconvenience to users or the cost of rolling out tokens or smart cards.

Having spent more than $14 million recovering from this incident, I’d bet two-factor auth is looking pretty cheap now.

attacks, mitigations, risk

BlackHat USA 2012

As those of you still reading have probably noticed, I took a rather long hiatus from blogging. However, since my last published posts were a recap of BlackHat and DefCon in 2011, this seems like a great place to start up again! So, without further ado, a trip report:

This year I’ve decided to make a departure from the talk-by-talk trip reports I’ve done in the past. Most of the interesting presentations are already online (the whitepapers and slide decks, at least) and I’ll link to them here, but overall this was a very interesting year in information security and I think the gestalt and the keynotes are more important than the specific exploits demonstrated.

BlackHat has changed from what it was five years ago. The criticism that it’s turned into a “vendor dog-and-pony show,” while harsh (it’s still worlds better than the RSA Conference), has some truth to it — the security presented at BlackHat these days is mostly the kind that comes in a box and slides into a server rack. However, the reputation and importance the conference has long had still draws some interesting speakers, who will often put off revelations for weeks or months to be able to reveal them at what is still the world’s #1 professional security conference. Nevertheless, as someone whose occupation is in secure design, architecture, and development, without some significant changes I think that the time of my attending BlackHat every year is coming to a close. (Also, to RSA, McAfee, and a few other vendor offenders: “booth babes” are really tacky at a professional conference. BlackHat isn’t an auto show, the practice offends a fair number of attendees, and it strains credulity to imagine that anyone purchases, say, cryptography hardware based on the hot girl at your booth who couldn’t actually answer any questions about your product. Considering that in the past three years I’ve seen this marketing practice met with nothing but ridicule, I’m amazed you keep it up.)

DT (The Dark Tangent, Jeff Moss) introduced the conference as always, and this year’s Day 1 keynote was retired FBI Assistant Director Shawn Henry. With his time on the Homeland Security Advisory Council and current role in ICANN’s byzantine and unaccountable bureaucracy, DT often seems to have somewhat “gone native” in government; this was at least the third consecutive year of a government keynote dropping “cyber” into every third sentence. DT’s intro was surprising, though — he brought up the Strikeback firewall from 15 years ago (a programmable firewall that could DoS your attackers, should you happen to feel like programming your corporate network to carry out automated felonies) and observed that you can counterstrike attackers with lawsuits, diplomacy, or direct action. A mostly-favorable view of counter-hacking was not a viewpoint I’d heard publicly expressed in years, especially from someone with government connections. Shawn Henry presented a rather militarized view of the information security landscape, going so far as to call computer network attacks “the #1 threat to global security.” That seems hyperbolic to me, but at least it shows they take the problem seriously.

On one hand, Henry showed considerable insight into the scope of the problem — better than we’ve historically seen from government (many previous BlackHat government keynotes have been laughable.) While he claims that the vast majority of hacking & data breaches happen in the classified environment where we never hear about it — a claim I find dubious just due to the sheer difference in scale between the classified environment and the Internet, but can’t wholly rule out either — he recognizes that the “cyber domain” is a great equalizer. A sophisticated organized-crime group or circle of motivated hackers does not differ meaningfully in capability from a state-sponsored actor; while Stuxnet and Flame may have been crafted by governments, they do not differ in sophistication from other advanced malware, and plenty of people outside the classified sphere have access to 0-days. It doesn’t take billions of dollars and government resources to carry out a major electronic attack.

(An aside: while this wasn’t something Henry talked about, one of my biggest concerns for the future is the advancing state of 3D printing, desktop manufacturing, and synthetic biology. You can assemble a synthetic biology lab and genetically engineer organisms in your garage at this point, on a budget within the reach of a well-off amateur. While molecular nanotechnology is a long way off still, I have no idea how to deal with a world in which we must somehow defend against fail-once-fail-everywhere existential threats that can come from individual nuts anywhere in the world.)

Mr. Henry claims that we have to go from mitigating the vulnerability to mitigating the threat — that is, prevent the attacks from happening in the first place. Just as the FBI after 9/11 had to go from measuring cases and arrests to measuring threat elimination as an international intelligence agency, we need to move from trying to set up perimeters that keep attackers off the network to trying to detect and remediate attacks as they happen. We have to assume a breach and plan accordingly. Considering that the progressive decline in the effectiveness of perimeters, and the consequent need for distributed defense throughout the network, the enterprise, and the world, is the reason behind the name of this blog, that part at least I agree with.

Henry says that the NSA and DHS have the authority and responsibility to protect government and military networks, but no one in government is monitoring the commercial space. This contrasts with how other countries do things — the United States is unusual in not having an overt industrial espionage program that attempts to advantage local businesses. He exhorts us — people in the information security sphere — to be proactive in finding out who our adversaries are and sharing that data with the government. Judging by the questions (some of which were amusingly prefixed with “Without using the word ‘cyber’…”) this was not a popular view, for a couple of reasons.

The panel following the keynote (including DT, Jennifer Granick, Bruce Schneier, and Marcus Ranum, all BlackHat alumni from the very first conference) went into these criticisms further. One is of course that “sharing” with the government tends to be a one-way street, which prevents people from viewing them as a partner. The other, however, is that many felt that this is a spectacular abdication of responsibility by the government — Really? We’re supposed to keep the Chinese intelligence community out of our servers? And what are the billions of tax dollars going to the NSA and DHS for, then?

The panel discussion also went into an interesting discussion on what they — as some of the luminaries of information security — believe a CISO should be spending their money on. DT advised them to spend it on their employees, which of course went over swimmingly with this audience. Other advice included to spend on security generalists, not experts in particular tasks or technologies, since narrow expertise is increasingly available via outsourcing, and to focus on detection and response rather than defense and prevention. “The cloud” is going to happen whether CISOs want it to or not, so we have to find a way to have a data-centric model where we know what’s out there and can tell if it’s been tampered with; keeping everything behind walls will not work indefinitely.

A final controversy came up over the issue of government-sponsored hacking like Flame and Stuxnet. DT looked favorably on it, saying that before this there wasn’t really any room between harsh words and dropping 2000-pound bombs; governments having a tool available to carry out an attack without killing people is, on balance, good for the world. Jennifer Granick and Marcus Ranum vehemently disagreed, describing it as a crime against humanity: putting civilian infrastructure on the front lines of a nonexistent war, then telling people to be glad that at least we didn’t blow them up. The world of information security has become the political world; the world has changed such that Internet policy is just policy now.

There was also a keynote interview with author Neal Stephenson, which while entertaining did not really provide any insight so I’m not going to relate it here. If you’re a science fiction fan, though, it’s worth looking up when it inevitably appears on YouTube in a few weeks.

And now, on to the talks. One interesting trend was the recurring theme of attacks on pseudo-random number generators. Dan Kaminsky discussed how PRNG breaks have compromised RSA (about 0.5% of keys were compromised) and Debian OpenSSH, and hardware RNG just isn’t available most of the time. The problem is that our entropy pools are very limited on servers, VMs, the cloud, and embedded devices — we don’t have keyboard or mouse and frequently don’t even have disk or good hardware interrupts. TrueRand (which used an interrupt every 16ms to generate noise) was disavowed by its author but does still work, and Dan presented DakaRand which uses multiple timers and threads to generate noise since multithreaded programming is to a degree nondeterministic (and his algorithm proceeds to SHA-2 hash the noise, use Von Neumann debiasing, Scrypt the results and use AES-256-CTR to turn it all into a stream.) Each call is independent so it’s secure against VM cloning; unfortunately, most developers will just keep calling /dev/urandom.

George Argyros and Aggelos Kiayias proceeded to demonstrate a variety of entropy reduction, seed attacks, and state recovery attacks against random number generators, managing to compromise PHP session cookies and administrative recovery tokens across multiple applications. They went into some detail on the various random implementations on both Linux and Windows and how entropy can be reduced or the seed reverse-engineered; some of them were fascinating (like attacking your own session cookie first to build an application-specific rainbow table.) If you’re up for the crypto math, take a look at their presentation.

Other topics included a resurgence of attacks on NTLM (mostly pass-the-hash and SMB relay variants), using browser exploits to pivot into more traditional infrastructure exploits against routers, the recently-publicized exploits against Windows Vista/Windows 7 gadgets (which were well known and described in Microsoft’s own gadget security whitepaper; in short, it’s easy for a developer to write a vulnerable gadget, so most of them do so), and persistent injection of browser exploits during a MitM session (e.g. on open WiFi.)

However, there was one other talk I’d like to go into some detail on — iSec Partners presented The Myth of Twelve More Bytes around the impact of IPv6, DNSSEC, and the new commercial gTLDs (the top-level domain identifiers like .com and .net) being issued by ICANN. In short, these changes remove artificial scarcity from the Internet in a variety of ways that are not broadly understood, and this is fundamentally changing the architecture of the network because assumptions of scarcity are deeply woven into our current designs.

IPv6 is not just changing IP addresses from 4 bytes to 16. It makes substantial adjustments to the layer 2 through 4 network stacks, removing ARP while modifying TCP, UDP, ICMP, and DHCP and adding new mechanisms like SLAAC (Stateless Address Auto-Configuration.)

ICMP becomes critical infrastructure — you can’t blindly drop it or SLAAC, router discovery, and neighbor discovery stop working. Yet those are unauthenticated protocols, spoofable by anyone on the segment; duplicate address detection is also impractical since it can be spoofed for a DoS. SLAAC eliminates the need for DHCP (though DHCPv6 does exist for procuring additional addresses) by allowing clients to just give themselves an IP with the static SLAAC prefix, ask for a network identifier, and append their own MAC address to get a globally-unique IP. There is also a protocol (RFC4941, Privacy Addresses) for generating additional random addresses since for obvious reasons not everyone wants to have a permanent, immutable, Internet-routeable globally unique IP (it would sure make ad networks’ and trackers’ jobs easier.)
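For the curious, the MAC-derived part of that address construction looks like this — a sketch of the RFC 4291 modified EUI-64 procedure, with a documentation prefix and made-up MAC:

```python
def eui64_address(prefix, mac):
    """Derive a SLAAC address from a MAC (RFC 4291 modified EUI-64):
    split the MAC in half, insert ff:fe, and flip the universal/local bit."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the U/L bit
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = [f"{iid[i] << 8 | iid[i + 1]:x}" for i in range(0, 8, 2)]
    return prefix + ":".join(groups)

print(eui64_address("2001:db8:1:2:", "00:1a:2b:3c:4d:5e"))
# → 2001:db8:1:2:21a:2bff:fe3c:4d5e
```

The embedded `ff:fe` and the recoverable MAC are exactly why these addresses are so trackable, and why RFC 4941 privacy addresses exist.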

If you’ve not disabled IPv6 entirely on your Windows machines, the fact that you may not be rolling out IPv6 in your network yet may not protect you from IPv6 attacks. SLAAC is the default on Windows, so your Windows machines all have IPv6 addresses already, and the IP stack in Windows 7 and beyond prefers v6 over v4 — your Windows machines are just waiting for something to come along and talk IPv6 to them, and it doesn’t have to be you. Likewise, there are many IPv6-over-IPv4 transition mechanisms, like 6to4, 6rd, ISATAP, and Teredo, and while these can be blocked, you have to do so proactively — they just work otherwise, on your existing hardware and software. Your Windows servers may be speaking to a hacker on the Internet via Teredo right now without your knowledge.

Other interesting implications of IPv6: since unlimited extension headers are allowed, TCP packets may be fragmented before the TCP header — you can’t do stateless filtering on port number! And when it comes to stateful filtering, consider that you can’t keep a full state table (the number of possible source addresses to keep state on is unfathomably large) and attackers can send every packet from a brand new, totally valid IP address — these aren’t spoofed addresses, they can make full connections. Stateful filtering that’s not subject to trivial DoS is going to be a challenge.

The new top-level domain process also causes its share of problems by breaking the Internet’s trust model. And there are going to be a lot of new top-level domains — despite the $185,000 application fee for a top-level domain, Amazon has applied for 76 and Google for 121, and that’s just for “brand” domains they want to reserve entirely for themselves and not run as a registry. (i.e. Google has decided that all “.search” sites should be Google sites; likewise, “.shoes” or “.clothes” domains will all be Amazon’s.) While the process for applying for a gTLD is labyrinthine, there’s no step of the process that tries to judge whether or not issuing a domain is a good idea.

Currently we assume that a trademarked domain likely belongs to its most recognizable holder — paypal.com goes to PayPal. But who does paypal.rugby go to? Paypal.shoes (if it were to exist) goes to Amazon. With 1,400 new gTLDs, many of which will be run as public registries, this assumption no longer holds — “defensive registration” becomes impossible because you can’t even identify all the registries. We assume that IP spoofing is highly limited in full-connection situations, but under IPv6 an attacker doesn’t need IP spoofing to create unlimited connections. We assume that an individual is highly attached to a few IP addresses (their home, work, proxies, VPNs, etc.), which will no longer hold true. And how do we handle homograph domains when they can exist in 1,400 TLDs?

How do we do anti-fraud and adaptive authentication without IP scarcity? What about DDoS prevention, rate limiting, IDS, SIEM, event correlation — even load balancing? How do we do IP reputation? Moving it up to the network level is a problem since one bad actor could DoS their entire network provider. Right now, if you advertise an IPv4 space you don’t own, the people who do own it will notice and complain — but in IPv6, if you own an AS — any AS at all — you can create IP ranges for dedicated tasks, advertise them, then tear them down and leave little evidence they ever existed at all.

For all my complaining about the changing nature of BlackHat (the vendor floor is really the center of the conference now), it certainly brought to my attention some issues that are not yet “common knowledge” in the information security world. We’re already past time to start preparing for these things, as attackers already are.

attacks, crypto, industry, networks, privacy, products, society

DefCon 19, Day 3

Sunday was interesting — this was the first DefCon I have attended (and I’ve been to the last five) where Sunday was actually busy. Normally Sunday feels very empty — most people have gone home, and the ones that are still around are too hung over to go to the morning sessions. I was not quite hung over enough to miss the morning sessions, so off I went. I’d imagine a lot of people took advantage of DefCon TV, though.

I started the day with Whit Diffie & Moxie Marlinspike’s Q&A session in Track 1. There was no topic in the program; instead, they just both answered questions about SSL and cryptography. One interesting detail: one of the reasons RSA has become more successful (or at least frequently used) than Diffie-Hellman was that Diffie himself favored it, on account of certain attacks for which RSA is more favorable (though Diffie-Hellman is better against others.) A lot of the discussion, though, was about Moxie’s notary system proposal. I have to give Moxie credit here — though I’m still not sure that I agree with his proposal, I probably spent more time debating it with people than I spent talking about any other presentation this weekend. It certainly spawned a lot of conversation.

Paul Craig’s iKAT tool is always interesting, and he presented a new version. The previous one only attacked Windows kiosks, and now he’s cross-platform. Essentially, the principle is that Internet kiosks are designed with the threat model of defending the kiosk from the user… and not defending it from the Internet. Thus, iKAT is an Internet site that can be used by the user to attack his own machine, under the assumption that his own machine is some sort of locked-down Internet kiosk with restricted permissions. iKAT allows the user to take full administrative control of most of them, either just to get unrestricted Internet access or, if he’s less friendly, to Trojan the card reader.

Next, Alva Duckwall presented A Bridge Too Far, a talk on bypassing 802.1x via creating a layer-2 transparent bridge. This was actually a rather cool talk, and coupled very well with yesterday’s talk on exploiting hotel VoIP via VLAN-hopping by cloning the phone. With all the focus being on Layer-3 protocols these days, it’s cool to see that you can still do some interesting stuff at Layer-2.

There was a talk in the afternoon on bit-squatting — essentially, a binary version of typosquatting wherein you register a domain that’s a 1-bit error off from a legitimate domain, not intending to catch user error but rather hardware and network errors. 1-bit errors are fairly common, at least when multiplied by billions of Internet users. I didn’t attend the talk because I felt that all the interesting material was basically contained in the title — the moral of the story is that you should probably register the 1-bit-off variants of your own domain name if you’re going to run a highly-targeted site like a banking site. Talking to people who did attend… the consensus was that it shouldn’t have been a 50-minute talk.
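Enumerating the candidate registrations is trivial — here’s a sketch that flips each bit of each character and keeps only results that are still valid lowercase hostname characters:

```python
import string

def bitsquats(domain):
    """All domains exactly one bit-flip away that remain valid hostnames."""
    valid = set(string.ascii_lowercase + string.digits + "-")
    out = set()
    for i, ch in enumerate(domain):
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit))
            if flipped in valid and flipped != ch:
                out.add(domain[:i] + flipped + domain[i + 1:])
    return sorted(out)

print(bitsquats("cnn"))  # includes 'ann', 'gnn', 'cno', 'cnl', ...
```

Running this over your own second-level domain gives the defensive-registration shopping list the talk presumably ended with.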

Instead, I visited datagram’s talk on tamper-evident devices. Most of them, well, aren’t tamper-evident, at least not against a skilled attacker. The attacks range from very obvious (stretching plastic, razoring up adhesive) to requiring more knowledge (dissolving adhesive with a wide variety of organic and inorganic solvents) to very clever. Note that during the Tamper Evident contest at DefCon, wherein people tried to bypass a wide variety of anti-tampering seals and devices… none of the seals or devices successfully resisted attack.

I followed this up with a talk by the DefCon NOC on Building the DefCon network. It’s an interesting challenge — building a high-bandwidth network, wired and wireless, for use by 12,000 people, many of whom will be actively attacking it, given only 3 days, using only hardware you can afford to keep in a box 51 weeks of the year. Considering their constraints they do a remarkable job. This year’s secure wireless was, so far as anyone could tell, actually secure… and possibly safer than using GSM or CDMA in this environment (GSM is definitely broken, and the not-quite-confirmed rumor is that CDMA users were hit by an 0day MitM this year, too.) DefCon TV was a huge hit, even though it did not successfully reach all rooms.

The last talk of the day was Jayson Street’s dramatically-titled “Steal Everything, Kill Everyone, Cause Total Financial Ruin!” It was sometimes amusing, but overall it was mostly a self-aggrandizing pentester talking about various (mostly physical) exploits he had pulled off. Not really any valuable content for a security pro, though your average non-security person would probably be shocked at how trivially exploitable most systems are.

Having spent pretty much the whole weekend at DefCon events, I decided to go back down to the Strip, see a show, and have some delicious steak frites and wine at the Paris. It was a nice ending to a packed weekend.

Overall, DefCon this weekend was a huge success (I’m making a note here.) The Rio was a great environment, much better than the Riviera, with enough room to grow and real food to eat. Staying in the conference hotel and having a group to enjoy DefCon with made it a much more fun experience than past years; both will be things I’ll be sure to repeat. (Incidentally, Google Plus is a great tool for attending a con with a group — it’s like having your own private Twitter — though I can’t say that I have found much else it’s good for yet.) Speaking of Twitter: while using it at DefCon was indefensible in prior years, at this point, since everyone has a smartphone and a Twitter account, the #defcon hashtag has so much traffic it’s almost impossible to keep track of. Every time you bring it up there are hundreds of new tweets.

I think the new non-electronic badges were a success. While perhaps less “cool” than the electronic ones, far more people participated in the badge contest this year than have ever participated in hacking the electronic badges, and while badge lines did run 2-3 hours, at least they were available before the con started. At some point, DefCon management needs to learn that the conference is growing 10%+ per year and that they need to order enough badges for growth; considering the much lower cost of non-electronic badges, perhaps they’ll do that next year. The lines are entirely unnecessary — they exist only because everybody knows that badges have been under-ordered and people at the back of the line won’t get one. Without this pressure to get badges first, the infamous LineCon could be avoided.

DC303 and Rapid7 threw great parties. However, most of the fun I had was around the Rio pools — having them open until 2am was great, though even later would be nice (and allowing alcohol instead of having everyone smuggle it in would be an improvement, though I’m not holding my breath on that one.) Finally, thanks to DC206 for a great time, a lot of very interesting conversation, and confusing the hell out of taxi drivers.

attacks, hardware, networks, physical security, products

DefCon 19, Day 2

I slept in a bit on Saturday and missed the 10am panels. None of them seemed very relevant to me, though now I kind of regret missing the first panel. Apparently the former CEO of HBGary Federal, Aaron Barr, was scheduled to speak, but his former employer threatened him with a lawsuit, so at the last minute he was replaced with the mysterious masked pirate Baron von Arr. I’m certain no one has any idea who he might have been. I was also unable to make it to Schuyler Towne’s DIY Non-Destructive Entry talk on bypassing locks and doors, which is unfortunate as Schuyler is an interesting speaker; this is another one I’ll be sure to catch on video.

Mycurial gave an overview of High-Frequency Trading systems in the next talk. These are the systems by which computers trade stocks and other investments with other computers, as a form of arbitrage — they offer things for sale to fulfill trades before they actually have the items in question, then quickly buy them. It’s a speed game, with latency measured in nanoseconds, such that physical distance between the trader and the exchange matters (light travels only about a foot per nanosecond, after all, so a few hundred yards might put you behind another trader, resulting in a loss.) As a result, conventional security measures are practically nonexistent. Networks run on custom, non-standards-compliant TCP/IP and Ethernet stacks. Firewalls and IDSs, which can add microseconds of latency, are prohibitively slow. These networks are “dedicated,” but these days no network connections are truly dedicated — leased lines are still packet-switched and trunked. If someone managed to find their way into one of these networks they could do a lot of damage. For that matter, who’s to say the traders aren’t subtly attacking each other? We still don’t know for sure what caused the May 6, 2010 Flash Crash.
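The distance arithmetic is easy to check (assuming signals in fiber propagate at roughly 68% of c, a commonly-cited figure):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def one_way_latency_ns(meters, velocity_factor=0.68):
    """Propagation delay in nanoseconds over a fiber run of given length."""
    return meters / (C * velocity_factor) * 1e9

# Being 300 m farther from the exchange than a rival costs you:
print(round(one_way_latency_ns(300)))  # → 1472 (about 1.5 microseconds)
```

A microsecond and a half is an eternity in this game — more than enough for the closer trader to see the quote and act on it first.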

I did not manage to catch Richard Thieme’s Staring Into The Abyss at either BlackHat or DefCon, which is unfortunate; many attendees said it was the best talk of the conference. This will be another one to catch on video.

I went to a talk on the Metasploit vSploit Modules, which are modules intended to test IDSs, WAFs, and other network monitoring and filtering technology. Pretty neat code, but not really relevant to my interests.

Gus Fritchie’s Getting Fucked On The River explored vulnerabilities in online poker servers, and the arms race between cheaters and the poker sites’ attempts to stop them. There have been a host of exploits, from a predictable random number generator (if you seed your card-shuffling algorithm with a 32-bit number, there are only about 4 billion possible shuffles, against the 52! ≈ 8 × 10^67 orderings of a real deck, which means someone can essentially build a deck rainbow table and predict draws with great accuracy), to back-door “cheat detection” code that actually leaked hole cards to an insider, to poker bots that play well enough to beat average players (and can beat even skilled players if many of them collude together, or be used to launder money.)
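The seed-search attack is simple enough to sketch in a few lines of Python. Here a 16-bit seed space and Python’s `random` module stand in for a real poker server’s 32-bit seeded shuffler; the idea is identical, just 65,536× faster to search:

```python
import random

# A fresh 52-card deck: ranks A,2..9,T,J,Q,K in each of four suits.
DECK = [r + s for s in "SHDC" for r in "A23456789TJQK"]

def shuffle_with_seed(seed):
    """Shuffle a fresh deck exactly as a server seeding its RNG would."""
    rng = random.Random(seed)
    deck = DECK[:]
    rng.shuffle(deck)
    return deck

def recover_seed(observed, seed_space):
    """Find the seed whose shuffle starts with the cards seen dealt so far."""
    n = len(observed)
    for seed in seed_space:
        if shuffle_with_seed(seed)[:n] == observed:
            return seed
    return None

# The "server" shuffles with a secret seed; the attacker watches the
# first ten cards hit the table...
secret = 51437
dealt = shuffle_with_seed(secret)
recovered = recover_seed(dealt[:10], range(2**16))

# ...and can now name every card remaining in the deck.
assert shuffle_with_seed(recovered) == dealt
```

A real attacker precomputes the table once (the “deck rainbow table”), so at the table the lookup is instant after a hand or two of observed cards.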

A talk called VoIP Hopping The Hotel was one of the very few technical exploit talks I saw at DefCon this year. Luxury hotels are starting to put VoIP phones in rooms, using the same Ethernet lines as the in-room Internet. If you plug into the phone’s port, though, you see nothing on the network, and can’t get an IP — 802.1q VLAN trunking is used so the phones exist on a different virtual network than the Internet connections, and only the phones can see it. Now, properly used, 802.1q trunking is secure… but “properly used” means never allowing an untrusted user access to a “trunk port” (a single port which hosts multiple VLANs.) Since the hotel port does just this — both the VoIP VLAN and the Internet VLAN — it’s possible to use some tools demonstrated in this talk to gain access to the VoIP VLAN with a computer, puzzling out the VLAN ID for the VoIP VLAN and cloning the phone’s MAC and IP addresses. It takes some skill — send one wrong packet on the VoIP VLAN and you’ll trigger port security and get the whole connection shut down at the switch — but with proper tools isn’t very hard. So why would you want to be on the VoIP VLAN? Well, network designers tend to be lazy… and that VLAN tends to be the hotel’s internal network.
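None of this requires exotic hardware; at its core, a VLAN-hopping tool just emits Ethernet frames carrying an 802.1Q tag for the target VLAN. A minimal sketch of the frame layout (the MAC addresses and VLAN ID here are made up, and actually transmitting raw frames would additionally need a raw socket and root privileges):

```python
import struct

def dot1q_frame(dst_mac, src_mac, vlan_id, ethertype, payload, priority=0):
    """Build an Ethernet frame with an 802.1Q VLAN tag.

    The tag is 4 bytes inserted after the source MAC: the TPID 0x8100,
    then a 16-bit TCI holding 3 bits of priority, 1 drop-eligible bit,
    and the 12-bit VLAN ID.
    """
    tci = (priority << 13) | (vlan_id & 0x0FFF)
    return (bytes.fromhex(dst_mac.replace(":", ""))
            + bytes.fromhex(src_mac.replace(":", ""))
            + struct.pack("!HH", 0x8100, tci)   # TPID + TCI
            + struct.pack("!H", ethertype)      # inner EtherType
            + payload)

frame = dot1q_frame("ff:ff:ff:ff:ff:ff", "00:11:22:33:44:55",
                    vlan_id=100, ethertype=0x0800, payload=b"\x00" * 46)

# Bytes 12-13 carry the 802.1Q TPID, bytes 14-15 the TCI with the VLAN ID.
assert frame[12:14] == b"\x81\x00"
assert struct.unpack("!H", frame[14:16])[0] & 0x0FFF == 100
```

Cloning the phone’s MAC into `src_mac` and guessing the 12-bit VLAN ID (only 4,094 valid values) is exactly the “puzzling out” step described in the talk, which is why the one-wrong-packet port-security tripwire is the main thing standing in the attacker’s way.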

Finally, This is REALLY Not The Droid You’re Looking For was another good exploit talk. On Android devices, it’s possible to craft an application that uses only common permissions (“Read Phone State”) and uses only “safe” APIs (meaning automatic approval for publication in the Android Market) that spawns a service that watches for a specified list of apps, and (upon seeing one) foregrounds itself silently over the app in question. So someone can make a game which, after you have played it once, silently lies in wait and when I load up Facebook, or my bank’s app, or my password manager, pops up a fake login screen over the real one and intercepts the password. As a user, there is no defense and no detection; there may be no fix for this short of a significant overhaul of Android’s UI APIs and permissions.

Also back this year (for the first time in many years) was DefCon TV — the talks were broadcast over the hotel’s internal cable system to all the rooms. So when a talk filled up, you could just go back to your room and watch it there if you were staying in the Rio. It was quite convenient, though in some rooms (including mine) not all 5 tracks were available. Still, according to the DefCon Goons this helped a lot with crowding, since many people would watch talks from their rooms and only come down to the conference floor for more social activities.

For the evening, I met up with the DC206 group again, ate over at the Gold Coast hotel, and then dropped into the IOActive Freakshow (yet another pool party), followed by the DC303 party (featuring Dual Core and C64, playing a mostly drum-and-bass set in lieu of the usual nerdcore, albeit still with some rapping) and finally the DefCon White Ball (with Miss Jackalope playing more drum-and-bass.) There was a lot of dancing and not a small amount of drinking, with the usual discussion of hacking, infosec, and reasons to make a Tesla coil out of DefCon badges. All in all, it was another good night.

attacks, industry, networks, products, risk

DefCon 19, Day 1

Having finished with BlackHat, I checked out of the Flamingo and moved to DefCon’s new location this year, the Rio. This was an enormous upgrade from the Riviera, the previous location. For one, the conference center is nearly 50% bigger, and it’s beautiful. Traffic flow was greatly improved, despite record attendance (~12,000, from estimates I’ve heard, up 20% from last year.) It was crowded, but it was a manageable crowd, and I managed to get into everything I wanted to, save for a talk in Track 2 (by far the smallest of the 5 presentation rooms.) What’s more, the DefCon Goons improved things as the conference went along (they always do), so Saturday went even better than Friday.

I started the first day with 1o57’s talk on the new DefCon badge. This year’s badges were non-electronic (for the first time in several years) — they were antiqued titanium discs with the Eye of Ra and various codes inscribed in them with a water knife. Apparently making the 10,000 DefCon badges actually used the entire supply of sheet titanium in the United States at the time. Bright side of them being non-electronic: they actually had them before the con started! There has been a history of the badges getting hung up in customs on the way from China, but the non-electronic badges were produced in the USA. 1o57 designed an elaborate puzzle contest around the badges, but I can’t say much about it as I didn’t participate this year. There was, however, a very nice-looking code wheel on the floor of the Rio convention center rotunda that was key to the game and gave the room a nice DefCon look, so it was appreciated even by non-participants.

I spent the next couple of hours exploring the non-talk aspects of DefCon (none of the sessions in those slots were particularly interesting to me) and bought up some DefCon shirts and a couple of 2600 Hacker Calendars. I also donated $170 to the Electronic Frontier Foundation in my name and my wife’s, though I didn’t actually end up going to the party to which that entitled me admission (the donation and not the party was the primary purpose anyway.)

I dropped into Mark Weber Tobias’s physical security talk, called Insecurity: An Analysis Of Current Commercial And Government Security Lock Designs, which involved some hilarious attacks on “high-security” physical locks. You know those locks with 5 vertically-arranged pushbuttons you see in every airport or government building? They pop right open if you stick a neodymium-iron-boron magnet on the side. A keycard/keypad electronic lock with a USB port on the bottom for reprogramming is impervious to electronic attacks… but opens if you shove a paperclip to the back of the USB port. This sort of attack was ubiquitous — simple modifications that made sophisticated electronic locks open in purely mechanical ways. The overall point is that to get through a door, you do not have to open the lock — you have to actuate the mechanism that the lock actuates. Sometimes this is really easy.

The next talk was entitled Why Airport Security Can’t Be Done FAST, about the TSA’s Future Attribute Screening Technology. This project intends to detect malicious intent, based on biometrics and facial cues, kind of like an electronic Cal Lightman. The problem, in short, is the standard Bayesian statistical issues that always come up when trying to detect something vanishingly rare like terrorism. The top 10 airlines in the world carry a billion passengers per year — the top 5 US carriers alone carry 500 million per year. How many of these are terrorists who actually intend to blow up a plane that flight? Let’s be very conservative and pretend 100 people try to board an American plane with the intent to blow it up every year (probably an enormous overestimate.) Now let’s imagine my FAST system is 99.9% accurate at detecting terrorists — sounds great, doesn’t it? Let’s get that into our airports immediately! But wait… 99.9% accurate means it will probably catch all 100 terrorists. It’ll also catch 500,000 innocent people — 0.1% of the 500 million passengers. So if FAST points you out as a terrorist, there’s a 0.02% chance it’s right! Due to the base rate fallacy, a 99.9% accurate terrorist detector’s alarms are false positives 99.98% of the time. Oops.

What do you bet the real FAST isn’t 99.9% accurate, either?
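The base-rate arithmetic is worth checking for yourself, using the talk’s assumed numbers (500 million passengers, 100 actual terrorists, a detector that is 99.9% accurate in both directions):

```python
passengers = 500_000_000   # annual passengers, top 5 US carriers
terrorists = 100           # the talk's deliberately generous assumption
accuracy = 0.999           # true-positive and true-negative rate

true_positives = terrorists * accuracy
false_positives = (passengers - terrorists) * (1 - accuracy)

# Probability that someone the detector flags is actually a terrorist:
precision = true_positives / (true_positives + false_positives)
print(f"{precision:.2%}")  # ~0.02% -- the alarm is wrong ~99.98% of the time
```

Even a hundredfold better detector (99.999% accurate) would still be wrong on about 98% of its alarms, which is the talk’s real point: no plausible accuracy overcomes a base rate this low.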

I next attended the EFF Year in Civil Liberties panel for a summary of legal issues in information security, privacy, and free speech. This was followed by the Hackerspace Panel, about hackerspaces and DefCon groups around the country and what they do to encourage innovation and bring hackers, makers, and other interested people together. Both panels went very well, especially given that the Q&A nature of panels often makes them hit-or-miss.

Friday night at DefCon is surprisingly free of events — about all that’s going on is the Black Ball and the DefCon Pool Party. I met up with the DC206 group again, had some dinner, and mostly hung out at the pool party for the evening and discussed the day’s events and other topics in hackerdom. Frankly, talking about interesting topics (in a hot tub outside with DJs spinning techno in the background, no less) beats most parties anyway.

industry, physical security, privacy, risk, society, statistics, terrorism