Fingerprint Login and Authentication

With Apple’s introduction of Touch ID for the new iPhone 5S, there’s been a lot of news coverage of their new fingerprint-based unlock system — and not just about its usefulness for cats. People want to know: is it secure? Can someone bypass it? Within moments of its release there was already a sizeable bounty being offered to someone who could “break” Touch ID. Of course, the Chaos Computer Club demonstrated a bypass in under a week.

But the thing about fingerprints is that they’ve been easy to bypass for more than 20 years. It’s not that hackers have figured it out “already”; spies figured it out decades ago. You dust the fingerprint, photograph the pattern, print it out with an impact printer (or, in a pinch, a laser printer with the toner on the heaviest setting to leave raised printing), pour plain old Elmer’s glue on it, let the glue dry until firm but not quite solid, and peel it off. Presto! Prosthetic fingerprint.

The problem with how fingerprints are being used is that fingerprints are a form of identification, not authentication. They quickly say who you are, but they don’t prove who you are — essentially, when trying to translate the traditional username/password paradigm to biometrics, a fingerprint is like a username, not like a password. Unfortunately, it’s being used as a password. It’s especially funny on the new iPhone because they’re using fingerprints to authenticate to a touchscreen device — that is, an object that has your fingerprints all over it! If someone wanted into such a phone it would be really easy to lift the user’s fingerprints off the screen, create a prosthetic, and unlock the device with the fingerprint reader. You can’t make a secure authentication method out of something that people leave everywhere.

On the other hand, I can’t bring myself to care that much. There’s a general rule in computer security: “If the adversary has unrestricted physical access to your computer, it’s not your computer.” If someone’s trying to bypass the fingerprint lock on a phone, then they must have possession of the phone — and in that case there are many ways in, whether it’s locked with a fingerprint, a PIN, a password, or whatever. A fingerprint is more convenient than a PIN and probably about as secure. In either case, if the device storage isn’t encrypted, getting access to it is trivial; and if it is encrypted, the capability to perform an offline attack (a capability you have in a stolen-device scenario) means that bypassing a 4-digit PIN is equally trivial. You’re not really losing much, if any, security by going to a fingerprint.
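To put numbers on “trivial”: a 4-digit PIN has only 10,000 possibilities, and offline there’s no lockout or retry delay to slow you down. A toy sketch (assuming, purely for illustration, a storage key derived by a single hash of the PIN; real devices use slower, hardware-bound derivations, which changes the constant factor but not the keyspace):

```python
import hashlib
from itertools import product

# Hypothetical verifier recovered from a disk image. No real phone is quite
# this naive, but even a deliberately slow KDF only multiplies the work by a
# constant -- the 10,000-entry keyspace is the fundamental problem.
target = hashlib.sha256(b"7294").hexdigest()

for digits in product("0123456789", repeat=4):
    pin = "".join(digits)
    if hashlib.sha256(pin.encode()).hexdigest() == target:
        print("PIN recovered:", pin)  # exhausts all 10,000 candidates in milliseconds
        break
```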

The other problem with fingerprints as passwords — aside from the fact that you leave them everywhere — is that your fingerprint can’t be rotated. If your password gets stolen, you can change your password, but if your fingerprint is stolen, it’s stolen forever. There’s no way for you to change it. This is fine for an identifier (username), but not fine for an authenticator (password) — it puts you in the situation of “break once, break everywhere.” Once your fingerprint has been stolen by an adversary, they have it for the rest of your life. This is also why fingerprints (or any biometrics) should never be used to generate cryptographic keys.

You’ll find fingerprint readers on a lot of enterprise-model laptop computers, too. On these, the fingerprint reader is just an alternate authenticator to Windows, so Windows will still let you log in with your password if the fingerprint reader doesn’t work. It does (by design) reduce your security a bit — but once again, not much, because if someone is trying to break in via the fingerprint reader then they must have physical possession of your computer, and they’re going to get in anyway. The only protection against that is to enable BitLocker in PIN mode — that is, full-disk encryption with a PIN code required at power-on to decrypt the hard disk — and even then you’re only really safe if your computer comes with a TPM (which most business laptops do, but most other PCs do not.) Most people don’t do this, which means that, fingerprint or password, your data is easily accessible to anyone who has possession of your PC.

So all told, there’s not much reason not to use fingerprint unlock on a phone, since phone unlock is not normally a boundary where we expect much security (as our usual mechanisms — either “swipe to unlock” or a 4-digit PIN code with unlimited guesses allowed — provide very little security anyway.) But from a systems design perspective, if you want real security, fingerprint should not be treated as an authenticator, regardless of the technology being employed.

authentication, hardware, industry, risk

The Blade Itself Incites to Violence

First we find out Verizon has been essentially running a pen register on its entire customer base for three months, under a FISA court order. Then we find out it was a renewal — given that the FISA court has approved some 38,000 warrants and denied only around 130, there’s little reason to doubt that the FISA court approves a pen register on every US phone company every three months.

And then Edward Snowden turns the NSA’s terrible PowerPoint slides (seriously, could they put any more flag and eagle clip art in there if they tried?) over to the Guardian, and it looks like PRISM has direct access to every record of customer data at ten major Internet service companies. PRISM quickly overtakes the Verizon scandal in attention.

What are we to make of this? A tempest in a teapot, or that the United States has already gone over the edge into a police state? The mainstream media certainly promulgates both views — and Congress has given them plenty of ammunition to do so, with Snowden called whistleblower, hero, criminal, or traitor depending on who’s giving the sound bite.

Of course, all the major Internet companies — Microsoft, Google, Facebook, etc. — have claimed to have no knowledge of PRISM, and not to be party to any worldwide NSA-led spy ring. As someone who works in security at a major Internet company, frankly, I believe them. Which is to say that I believe that spokesperson has no knowledge of PRISM and genuinely believes his employer is not party to any worldwide NSA-led spy ring. But these companies have criminal compliance teams — groups whose role is to liaise with law enforcement around the world, and to determine which requests, subpoenas, and warrants to quietly obey and which to resist. These criminal compliance teams operate in secret, necessarily — it’s often outright illegal for them to share the requests they receive (the USA PATRIOT Act’s National Security Letters come with gag orders attached), and even if it’s not, it’s bad practice. Most of the time they’re assisting in the investigation of bona fide bad people, child pornographers and fugitive murderers and the like, and talking too much jeopardizes the investigation. Criminal compliance people are law enforcement people — they’re Lawful Good, they believe in what they’re doing, and generally rightfully so. They may care passionately about civil liberties, and they may push back on overreaching requests, but ultimately they believe in the power of government to do good, just as legislators do, or they wouldn’t be in that career — and that career requires a culture of secrecy. They don’t talk, and their managers don’t ask, because that’s their job. So the spokespeople at Microsoft and Google and Facebook and so on are telling the truth — they’ve never heard of PRISM, they don’t know about any NSA spy ring. And yet that means very little; they wouldn’t have heard of it, and they wouldn’t know about it, and the people who do won’t say. It’s their job not to say, and the great majority of the time, we as a society should be glad they’re doing their job. They put people like this guy in jail.

PRISM is probably not a spying system per se. It’s a glorified reporting layer — it presents to intelligence agents in usable form the intelligence the NSA has already collected, and allows them to easily request more. Those requests go through the usual due process, getting sent to some Internet company with an order from the FISA court. PRISM probably isn’t directly tied into the core systems of the Internet’s largest companies… but it indirectly is, by way of any number of other applications and processes, both technical and legal. Maybe even those criminal compliance teams have never heard of PRISM… they’ve heard of a few National Security Letters, and a few dozen warrants, and a few hundred subpoenas, and each one alone made sense, yet all of the data from all of them went into the NSA’s great oracle, and the whole is greater than the sum of its parts.

I will give the administration one thing: there’s no evidence that the data from PRISM is being abused. PRISM knows about your Google searches, it knows about your email’s contents, it knows all the little felonies and misdemeanors you’ve committed. And make no mistake, you have committed them: our legal code has become so labyrinthine, everyone is a felon — as Cardinal Richelieu said, “if you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him.” When even copyright infringement is a criminal offense, when the Computer Fraud and Abuse Act makes violating website terms of service (you haven’t read them) a felony, a prosecutor with the will and the political support can prosecute anyone. Yet… they don’t. The NSA isn’t turning over everyone’s drug purchases and porn habits and music downloads to local district attorneys — it doesn’t look like they’re turning over anyone’s. They’re using it to look for terrorists, because that’s their charter, and nothing more. Obama’s not lying when he says the program has thorough oversight and is carefully targeted.

Allow me to take a digression here. The Transportation Security Administration was established to secure the nation’s transportation system against terrorism. Their charter is very clear: strengthen the security of the nation’s transportation systems and ensure the freedom of movement for people and commerce. The TSA’s charter, notably, is not to wipe out drug trafficking, or to prevent smuggling, or to enforce customs laws, or to prevent illegal immigration. And thus it does not try to do these things: its rules are all regarding weapons and explosives, not drugs or contraband (those drug-sniffing dogs are CBP, not TSA), and its security measures are aimed at that target. Sometimes they may be ridiculous — X-ray scanners that can’t detect objects placed at your sides, say, or the constant “preparing for the last war” of shoe removal and liquid bans — but their aim is clear even if the shots are wild. Thus, it was no surprise that they recently decided to stop screening for small knives, golf clubs, multi-tools, and other minor weaponry. These items are no threat to the security of an aircraft — any weapon that can’t threaten more than one person at a time isn’t. No one with a knife is going to get through a cockpit door, and even if they take a hostage they’re not likely to kill more than one person — tragic for that one person, to be sure, but no threat to the aircraft, much less the transportation system. The TSA wanted to focus on threats to the aircraft — bombs, guns, and the like.

Yet the flight attendants’ union objected (naturally — they’re the ones who will get stabbed with those small knives, hit with those golf clubs, etc., and the safety of the transportation system is, to them, little consolation), and some opportunistic members of Congress latched on and threw a fit. How dare the TSA not stop an obvious threat? It didn’t matter that it’s not the TSA’s mission to stop that threat. The TSA is beholden to Congress, Congress is driven by public opinion, public opinion is driven by the media, and the media is driven by fear, because fear gets ratings. Fear sells, so it owns the media, which owns the public, which owns Congress. So now the TSA has backed off from the change — they can stop a drunkard with a pocket knife, so they must stop a drunkard with a pocket knife. Never mind that it’s not their charter, that it has nothing to do with the safety of the transportation system, that it’s unrelated to terrorism or homeland security.

Maybe Obama’s right — maybe PRISM isn’t really a threat, just a reporting system, and maybe the NSA, despite the fact that a random analyst “sitting at my desk certainly had the authorities to wiretap anyone from you or your accountant to a Federal judge to even the President” isn’t abusing that power. Like the criminal compliance employees at major Internet companies, people working for the NSA are by and large loyal American citizens who perform their role because they believe in it, and because they know they’re doing good for their country. They swear an oath to uphold the Constitution, and that includes the Fourth Amendment. In any case, NSA surveillance is absolutely inadmissible in court for domestic crimes; FISA orders are only valid for, as the name implies, foreign intelligence.

But what happens when the media turns its attention to something other than terrorism? What happens when public opinion gets incited against something else — something evil, of course, but nevertheless something outside the NSA’s purview? What happens when the public’s fear turns from terrorism to human trafficking, or child abduction, or illegal immigration, or foreign cyber-attacks, or “hackers,” or corrupt bankers? The NSA has the evidence to catch these people — Congress will demand action. We have a hundred thousand spies now: they have the capability, they have the information. The law will change; maybe not now, maybe not for a decade, but if we don’t strangle this right now, it will change. Even if every word the President and General Alexander say is true, it cannot remain true as long as these capabilities continue to exist and grow — we know exactly where this road leads. They can do it, so they must: as Homer said, the blade itself incites to violence.

legal, privacy, society, terrorism

South Carolina Hack Attack Root Causes

Recently, the South Carolina Department of Revenue was hacked, losing tax records on 3.6 million people — that is, most of South Carolina’s population. These contained Social Security numbers at the very least, as well as 3.3 million bank account numbers, and may have been full tax returns (they haven’t said.)

There’s been the usual casting of blame after such an incident, but it’s quite interesting to read over the incident response report they had Mandiant prepare for them. Despite being “PCI-Compliant”, they had a number of vulnerabilities that let the hackers break in. But what could they really have done to protect themselves? From the report, the attacker went through 16 steps:

1. August 13, 2012: A malicious (phishing) email was sent to multiple Department of Revenue employees. At least one Department of Revenue user clicked on the embedded link, unwittingly executed malware, and became compromised. The malware likely stole the user’s username and password. This theory is based on other facts discovered during the investigation; however, Mandiant was unable to conclusively determine if this is how the user’s credentials were obtained by the attacker.

It’s not clear here if this was untargeted spam phishing with off-the-shelf malware, or a spear-phishing attack on the DOR with custom malware. If it’s the former, then this would have been prevented by any decent mail security product (to block spam and phishing) and desktop anti-malware software with current signatures & centralized monitoring. Since I would think any “PCI-Compliant” institution would have this, my guess is that this was a spear-phishing attack. The unfortunate fact is that there’s basically nothing you can do about spear-phishing and targeted malware; by its nature it evades automated detection, and security awareness training is of limited effectiveness against a phishing mail customized for your employees. So far there’s no sign that the state DOR screwed up here.

2. August 27, 2012: The attacker logged into the remote access service (Citrix) using legitimate Department of Revenue user credentials. The credentials used belonged to one of the users who had received and opened the malicious email on August 13, 2012. The attacker used the Citrix portal to log into the user’s workstation and then leveraged the user’s access rights to access other Department of Revenue systems and databases with the user’s credentials.

And right here in step 2 I think we’ve found the root cause of the attack. They had an external remote access service that allowed single-factor login — coming in through the perimeter from the Internet using only a password. Given that spear-phishing & targeted malware are not preventable, you have to assume that passwords will be stolen and have barriers in place to keep password-bearing attackers out; two-factor auth on remote access services should be a bare minimum, whether that’s SecurID tokens, smart cards, or other mechanisms.
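For concreteness, here’s roughly what the soft-token variety of a second factor computes — a minimal sketch of TOTP (RFC 6238), not the proprietary SecurID algorithm, with an invented enrollment secret:

```python
import base64, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    # Constant-time compare; a real deployment would also accept the
    # adjacent time steps to tolerate clock drift.
    return hmac.compare_digest(totp(secret_b32), submitted)

SECRET = "JBSWY3DPEHPK3PXP"  # hypothetical per-user secret
print(totp(SECRET), verify(SECRET, totp(SECRET)))
```

The stolen password becomes useless on its own: the per-user secret lives in the token and on the server, and never crosses the wire.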

3. August 29, 2012: The attacker executed utilities designed to obtain user account passwords on six servers.

Dumping the LSA secrets requires administrative privileges. It’s possible the credentials the attacker acquired in step 1 were administrative on some servers, in which case there’s no new exploit here. But if they weren’t, the attacker elevated privileges in some way, implying that the DOR might have had a patch-management problem. Once again, though, it’s not clear that there’s much they could have done about it — patching inside of 30-60 days is actually very difficult for an enterprise of decent size, even a mature, technically competent one. If the attacker used a recent exploit, then the DOR might well have been no worse off, patching-wise, than everyone else. On the other hand, if they used something ancient, this might be another DOR failure. That said, with proper authentication on the remote access service, the attacker shouldn’t have even gotten this far.

4. September 1, 2012: The attacker executed a utility to obtain user account passwords for all Windows user accounts. The attacker also installed malicious software (“backdoor”) on one server.

At this point the attacker is a domain administrator; if he’s dumping “all Windows user accounts” he’s got at least a network login on the domain controller. Chances are that a domain admin had logged onto the first compromised server at some point, and thus the attacker captured his cached credentials. No new attacks or exploits here.

5. September 2, 2012: The attacker interacted with twenty one servers using a compromised account and performed reconnaissance activities. The attacker also authenticated to a web server that handled payment maintenance information for the Department of Revenue, but was not able to accomplish anything malicious.
6. September 3, 2012: The attacker interacted with eight servers using a compromised account and performed reconnaissance activities. The attacker again authenticated to a web server that handled payment maintenance information for the Department of Revenue, but was not able to accomplish anything malicious.
7. September 4, 2012: The attacker interacted with six systems using a compromised account and performed reconnaissance activities.
8. September 5 – 10, 2012: No evidence of attacker activity was identified.
9. September 11, 2012: The attacker interacted with three systems using a compromised account and performed reconnaissance activities.

Nothing interesting here. Very few enterprises could have detected the above; it would require the sort of aggressive NIDS with extensive monitoring that’s normally only found in classified environments.

10. September 12, 2012: The attacker copied database backup files to a staging directory.
11. September 13 and 14, 2012: The attacker compressed the database backup files into fourteen (of the fifteen total) encrypted 7-zip archives. The attacker then moved the 7-zip archives from the database server to another server and sent the data to a system on the Internet. The attacker then deleted the backup files and 7-zip archives.

This was a database exfiltration of over 8 gigabytes of data — one thing a NIDS actually could be effective against, if tuned properly.
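The shape of such a rule is simple; the hard part is maintaining the baseline and the destination whitelist. A toy sketch (all hostnames, thresholds, and flow records invented) of an egress-volume alert:

```python
from collections import defaultdict

# Flow records as (source host, destination IP, bytes out); in practice
# these would come from NetFlow/IPFIX exports or a sensor on a span port.
flows = [
    ("dbserver01", "203.0.113.7", 9_100_000_000),  # ~8.5 GB to an unknown host
    ("web01", "198.51.100.2", 40_000_000),
]

DAILY_EGRESS_BASELINE = 500_000_000    # assumed per-host norm, in bytes
KNOWN_DESTINATIONS = {"198.51.100.2"}  # backup targets, replication peers

totals = defaultdict(int)
for src, dst, nbytes in flows:
    if dst not in KNOWN_DESTINATIONS:
        totals[(src, dst)] += nbytes

for (src, dst), total in totals.items():
    if total > DAILY_EGRESS_BASELINE:
        print(f"ALERT: {src} sent {total / 1e9:.1f} GB to unlisted host {dst}")
```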

The remainder of the attack steps were just some more reconnaissance, backdoor testing, and other probes, followed by Mandiant shutting down the attacker’s entry point.

The interesting thing here is that assuming this was spear-phishing with targeted malware, the only mistakes the DOR seems to have made were insufficient IDS tuning (which is honestly usually high-effort, low-payoff security work) and having single-factor remote access (which is catastrophic.) There’s nothing in this report that makes it look like the DOR’s IT department was run by a gang of idiots (like in, say, last year’s many Sony attacks); it looks like an organization that was doing most things right but had failed to deploy two-factor remote access. I’d wager their IT security guys wanted to, too, but were blocked by either the inconvenience to users or the cost of rolling out tokens or smart cards.

Having spent more than $14 million recovering from this incident, the DOR is probably finding two-factor auth pretty cheap right about now.

attacks, mitigations, risk

BlackHat USA 2012

As those of you still reading have probably noticed, I took a rather long hiatus from blogging. However, since my last published posts were a recap of BlackHat and DefCon in 2011, this seems like a great place to start up again! So, without further ado, a trip report:

This year I’ve decided to make a departure from the talk-by-talk trip reports I’ve done in the past. Most of the interesting presentations are already online (the whitepapers and slide decks, at least) and I’ll link to them here, but overall this was a very interesting year in information security and I think the gestalt and the keynotes are more important than the specific exploits demonstrated.

BlackHat has changed from what it was five years ago. The criticism that it’s turned into a “vendor dog-and-pony show,” while harsh (it’s still worlds better than the RSA Conference), has some truth to it — the security presented at BlackHat these days is mostly the kind that comes in a box and slides into a server rack. However, the reputation and importance the conference has long had still draws some interesting speakers, who will often put off revelations for weeks or months to be able to reveal them at what is still the world’s #1 professional security conference. Nevertheless, as someone whose occupation is in secure design, architecture, and development, without some significant changes I think that the time of my attending BlackHat every year is coming to a close. (Also, to RSA, McAfee, and a few other vendor offenders: “booth babes” are really tacky at a professional conference. BlackHat isn’t an auto show, it offends a fair number of attendees, and it strains credulity to imagine that anyone purchases, say, cryptography hardware based on the hot girl at your booth who couldn’t actually answer any questions about your product. Considering that in the past three years I’ve not seen this marketing practice responded to with anything but ridicule, I’m kind of amazed you keep it up.)

DT (The Dark Tangent, Jeff Moss) introduced the conference as always, and this year’s Day 1 keynote was retired FBI Assistant Director Shawn Henry. With his time on the Homeland Security Advisory Council and current role in ICANN’s byzantine and unaccountable bureaucracy, DT often seems to have rather “gone native” in government; this was at least the third consecutive year of a government keynote dropping “cyber” into every third sentence. DT’s intro was surprising, though — he brought up the Strikeback firewall from 15 years ago (a programmable firewall that could DoS your attackers, should you happen to feel like programming your corporate network to carry out automated felonies) and observed that you can counterstrike attackers with lawsuits, diplomacy, or direct action. A mostly-favorable view of counter-hacking was not a viewpoint I’d heard publicly expressed in years, especially from someone with government connections. Shawn Henry presented a rather militarized view of the information security landscape, going so far as to call computer network attacks “the #1 threat to global security.” Seems hyperbolic to me, but at least it shows they take the problem seriously.

On one hand, Henry showed considerable insight into the scope of the problem — better than we’ve historically seen from government (many previous BlackHat government keynotes have been laughable.) While he claims that the vast majority of hacking & data breaches happen in the classified environment where we never hear about it — a claim I find dubious just due to the sheer difference in scale between the classified environment and the Internet, but can’t wholly rule out either — he recognizes that the “cyber domain” is a great equalizer. A sophisticated organized-crime group or circle of motivated hackers does not differ meaningfully in capability from a state-sponsored actor; while Stuxnet and Flame may have been crafted by governments, they do not differ in sophistication from other advanced malware, and plenty of people outside the classified sphere have access to 0-days. It doesn’t take billions of dollars and government resources to carry out a major electronic attack.

(An aside: while this wasn’t something Henry talked about, one of my biggest concerns for the future is the advancing state of 3D printing, desktop manufacturing, and synthetic biology. You can assemble a synthetic biology lab and genetically engineer organisms in your garage at this point on a budget within the reach of a well-off amateur. While molecular nanotechnology is a long way off still, I have no idea how we implement a defense for a world wherein fail-once, fail-everywhere existential threats can come from individual nuts anywhere in the world.)

Mr. Henry claims that we have to go from mitigating the vulnerability to mitigating the threat — that is, prevent the attacks from happening in the first place. Just as the FBI, in becoming an international intelligence agency after 9/11, had to go from measuring cases and arrests to measuring threat elimination, we need to move from trying to set up perimeters to keep attackers off the network to trying to detect and remediate attacks as they happen. We have to assume a breach and plan accordingly. Considering that the progressive decline in the effectiveness of perimeters, and the consequent need for distributed defense throughout the network, the enterprise, and the world, is the reason behind the name of this blog, that part at least I agree with.

Henry says that the NSA and DHS have the authority and responsibility to protect government and military networks, but no one in government is monitoring the commercial space. This is a contrast with how other countries do things — the United States is unusual in not having an overt industrial espionage program that attempts to advantage local businesses. He exhorts us — people in the information security sphere — to be proactive in finding out who our adversaries are and sharing that data with the government. Judging by the questions (some of which were amusingly prefixed with “Without using the word ‘cyber’…”) this was not a popular view, for a couple of reasons.

The panel following the keynote (including DT, Jennifer Granick, Bruce Schneier, and Marcus Ranum, all BlackHat alumni from the very first conference) went into these criticisms further. One is of course that “sharing” with the government tends to be a one-way street, which prevents people from viewing them as a partner. The other, however, is that many felt that this is a spectacular abdication of responsibility by the government — Really? We’re supposed to keep the Chinese intelligence community out of our servers? And what are the billions of tax dollars going to the NSA and DHS for, then?

The panel also got into an interesting discussion of what they — as some of the luminaries of information security — believe a CISO should be spending their money on. DT advised them to spend it on their employees, which of course went over swimmingly with this audience. Other advice included to spend on security generalists, not experts in particular tasks or technologies, since narrow expertise is increasingly available via outsourcing, and to focus on detection and response rather than defense and prevention. “The cloud” is going to happen whether CISOs want it to or not, so we have to find a way to have a data-centric model where we know what’s out there and can tell if it’s been tampered with; keeping everything behind walls will not work indefinitely.

A final controversy came up over the issue of government-sponsored hacking like Flame and Stuxnet. DT looked favorably on it, saying that before this there wasn’t really any room between harsh words and dropping 2000-pound bombs; governments having a tool available to them to carry out an attack without killing people is on balance good for the world. Jennifer Granick and Marcus Ranum vehemently disagreed, describing it as a crime against humanity: putting civilian infrastructure on the front lines of a nonexistent war, then telling people to be glad that at least we didn’t blow them up. The world of information security has become the political world; the world has changed such that Internet policy is just policy now.

There was also a keynote interview with author Neal Stephenson, which while entertaining did not really provide any insight so I’m not going to relate it here. If you’re a science fiction fan, though, it’s worth looking up when it inevitably appears on YouTube in a few weeks.

And now, on to the talks. One interesting trend was the recurring theme of attacks on pseudo-random number generators. Dan Kaminsky discussed how PRNG breaks have compromised RSA (about 0.5% of keys were compromised) and Debian OpenSSH, and hardware RNG just isn’t available most of the time. The problem is that our entropy pools are very limited on servers, VMs, the cloud, and embedded devices — we don’t have a keyboard or mouse, and frequently don’t even have disk or good hardware interrupts. TrueRand (which used an interrupt every 16ms to generate noise) was disavowed by its author but does still work, and Dan presented DakaRand, which uses multiple timers and threads to generate noise, since multithreaded programming is to a degree nondeterministic (his algorithm then SHA-2 hashes the noise, applies Von Neumann debiasing, scrypts the results, and uses AES-256-CTR to turn it all into a stream.) Each call is independent, so it’s secure against VM cloning; unfortunately, most developers will just keep calling /dev/urandom.
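The kernel of the idea fits in a few lines. A toy version of the jitter-harvesting stage (my sketch, not Kaminsky’s code; DakaRand proper adds the debiasing, scrypt, and AES-CTR stages described above):

```python
import hashlib
import threading
import time

def jitter_entropy(rounds: int = 256) -> bytes:
    """Harvest scheduler/timer nondeterminism and hash it into a seed."""
    samples = bytearray()
    for _ in range(rounds):
        counter = 0
        done = threading.Event()
        # A timer thread races a spin loop; how far the counter gets before
        # the timer fires depends on scheduling jitter, which is (hopefully)
        # nondeterministic even in a freshly cloned VM.
        threading.Timer(0.001, done.set).start()
        while not done.is_set():
            counter += 1
        samples += counter.to_bytes(8, "little")
        samples += time.perf_counter_ns().to_bytes(8, "little")
    return hashlib.sha256(bytes(samples)).digest()

print(jitter_entropy().hex())
```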

George Argyros and Aggelos Kiayias proceeded to demonstrate a variety of entropy reduction, seed attacks, and state recovery attacks against random number generators, managing to compromise PHP session cookies and administrative recovery tokens across multiple applications. They went into some detail on the various random implementations on both Linux and Windows and how entropy can be reduced or the seed reverse-engineered; some of them were fascinating (like attacking your own session cookie first to build an application-specific rainbow table.) If you’re up for the crypto math, take a look at their presentation.
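A toy illustration of the seed-attack class, in Python rather than PHP: if an application seeds its generator with something small (a timestamp, a process ID), one observed output is enough to brute-force the seed and predict every subsequent token. The keyspace here is invented for speed:

```python
import random

def recover_seed(observed: int, keyspace: int = 2**20):
    """Brute-force a small seed space until one reproduces the observed output."""
    for seed in range(keyspace):
        if random.Random(seed).getrandbits(32) == observed:
            return seed
    return None

# The "application" seeds from something guessable...
leaked_token = random.Random(777_777).getrandbits(32)
# ...so the attacker recovers the seed and can predict every later token.
seed = recover_seed(leaked_token)
print(seed, random.Random(seed).getrandbits(32) == leaked_token)
```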

Other topics included a resurgence of attacks on NTLM (mostly pass-the-hash and SMB relay variants), using browser exploits to pivot into more traditional infrastructure exploits against routers, the recently-publicized exploits against Windows Vista/Windows 7 gadgets (which were well known and described in Microsoft’s own gadget security whitepaper; in short, it’s easy for a developer to write a vulnerable gadget, so most of them do so), and persistent injection of browser exploits during a MitM session (e.g. on open WiFi.)

However, there was one other talk I’d like to go into some detail on — iSec Partners presented The Myth of Twelve More Bytes around the impact of IPv6, DNSSEC, and the new commercial gTLDs (the top-level domain identifiers like .com and .net) being issued by ICANN. In short, these changes remove artificial scarcity from the Internet in a variety of ways that are not broadly understood, and this is fundamentally changing the architecture of the network because assumptions of scarcity are deeply woven into our current designs.

IPv6 is not just changing IP addresses from 4 bytes to 16. It makes substantial adjustments to the layer 2 through 4 network stacks, removing ARP while modifying TCP, UDP, ICMP, and DHCP, and adding new mechanisms like SLAAC (Stateless Address Autoconfiguration.)

ICMP becomes critical infrastructure — you can’t blindly drop it or SLAAC, router discovery, and neighbor discovery stop working. Yet those are unauthenticated protocols, spoofable by anyone on the segment; duplicate address detection is also impractical since it can be spoofed for a DoS. SLAAC eliminates the need for DHCP (though DHCPv6 does exist for procuring additional addresses) by allowing clients to just give themselves an IP with the static SLAAC prefix, ask for a network identifier, and append their own MAC address to get a globally-unique IP. There is also a protocol (RFC4941, Privacy Addresses) for generating additional random addresses since for obvious reasons not everyone wants to have a permanent, immutable, Internet-routeable globally unique IP (it would sure make ad networks’ and trackers’ jobs easier.)
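The MAC-derived half of that address (the modified EUI-64 interface identifier) is trivial to reproduce, which is exactly why the privacy extension exists. A quick sketch with an example prefix and MAC:

```python
def slaac_address(prefix: str, mac: str) -> str:
    """Build the modified EUI-64 address SLAAC derives from a /64 prefix and a MAC."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    groups = [f"{eui64[i] << 8 | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return prefix + ":" + ":".join(groups)

# e.g. prints 2001:db8:1:2:21a:2bff:fe3c:4d5e
print(slaac_address("2001:db8:1:2", "00:1a:2b:3c:4d:5e"))
```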

Even if you’re not rolling out IPv6 in your network yet, that may not protect you from IPv6 attacks unless you’ve disabled IPv6 entirely on your Windows machines. SLAAC is the default on Windows, so your Windows machines all have IPv6 addresses already, and the IP stack in Windows 7 and beyond prefers v6 over v4 — your Windows machines are just waiting for something to come along and talk IPv6 to them, and it doesn’t have to be you. Likewise, there are many IPv6-over-IPv4 transition mechanisms, like 6to4, 6rd, ISATAP, and Teredo, and while these can be blocked, you have to do so proactively — they just work otherwise, on your existing hardware and software. Your Windows servers may be speaking to a hacker on the Internet via Teredo right now without your knowledge.

Other interesting implications of IPv6: since unlimited extension headers are allowed, TCP packets may be fragmented before the TCP header — you can’t do stateless filtering on port number! And when it comes to stateful filtering, consider that you can’t keep a full state table (the number of possible source addresses to keep state on is unfathomably large) and attackers can send every packet from a brand new, totally valid IP address — these aren’t spoofed addresses, they can make full connections. Stateful filtering that’s not subject to trivial DoS is going to be a challenge.
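To see why, consider what a stateless filter has to do just to find the port numbers: walk an arbitrarily long extension-header chain and hope the transport header is in the fragment it’s looking at. A simplified sketch (only a few header types handled, no option parsing):

```python
import struct

EXT = {0, 43, 44, 60}  # hop-by-hop, routing, fragment, destination options
TCP, UDP = 6, 17

def find_ports(pkt: bytes):
    """Walk an IPv6 extension-header chain looking for TCP/UDP port numbers.

    Returns (protocol, src_port, dst_port), or None when the ports cannot
    be determined statelessly (e.g. they were pushed into a later fragment).
    """
    next_hdr = pkt[6]  # Next Header field of the 40-byte fixed header
    offset = 40
    while next_hdr in EXT:
        if offset + 8 > len(pkt):
            return None  # chain runs past the end of this fragment
        if next_hdr == 44:  # fragment header: 8 bytes, fixed
            frag_off = struct.unpack(">H", pkt[offset + 2:offset + 4])[0] >> 3
            if frag_off != 0:
                return None  # non-first fragment: the ports aren't here
            next_hdr, offset = pkt[offset], offset + 8
        else:  # generic extension header: length field is in 8-byte units
            next_hdr, offset = pkt[offset], offset + (pkt[offset + 1] + 1) * 8
    if next_hdr in (TCP, UDP) and offset + 4 <= len(pkt):
        src, dst = struct.unpack(">HH", pkt[offset:offset + 4])
        return next_hdr, src, dst
    return None
```

An attacker just stacks enough destination-options headers that the TCP header lands in the second fragment, and the filter gets None back; permit or deny, it has to guess.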

The new top-level domain process also causes its share of problems by breaking the Internet’s trust model. And there are going to be a lot of new top-level domains — despite the $185,000 application fee for a top-level domain, Amazon has applied for 76 and Google for 121, and that’s just for “brand” domains they want to reserve entirely for themselves and not run as a registry. (i.e. Google has decided that all “.search” sites should be Google sites; likewise, “.shoes” or “.clothes” domains will all be Amazon’s.) While the process for applying for a gTLD is labyrinthine, there’s no step of the process that tries to judge whether or not issuing a domain is a good idea.

Currently we assume that a trademarked domain likely belongs to its most recognizable holder — paypal.com goes to PayPal. But who does paypal.rugby go to? Paypal.shoes (if it were to exist) goes to Amazon. With 1400 new gTLDs, many of which will be run as public registries, this assumption no longer holds — “defensive registration” becomes impossible because you can’t even identify all the registries. We assume that IP spoofing is highly limited in full-connection situations, but under IPv6 I don’t need IP spoofing to create unlimited connections. We assume that an individual is highly attached to a few IP addresses (their home, work, proxies, VPNs, etc.), which will no longer hold true. How do we handle browser homographs when they can exist in 1400 TLDs?

How do we do anti-fraud and adaptive authentication without IP scarcity? What about DDoS prevention, rate limiting, IDS, SIEM, event correlation — even load balancing? How do we do IP reputation? Moving it up to the network level is a problem since one bad actor could DoS their entire network provider. Right now, if you advertise an IPv4 space you don’t own, the people who do own it will notice and complain — but in IPv6, if you own an AS — any AS at all — you can create IP ranges for dedicated tasks, advertise them, then tear them down and leave little evidence they ever existed at all.

For all my complaining about the changing nature of BlackHat (the vendor floor is really the center of the conference now), it certainly brought to my attention some issues that are not yet “common knowledge” in the information security world. We’re already past time to start preparing for these things, as attackers already are.

attacks, crypto, industry, networks, privacy, products, society

DefCon 19, Day 3

Sunday was interesting — this was actually the first DefCon I have attended (and I’ve been to the last five) where Sunday was actually busy. Normally Sunday feels very empty — most people have gone home, and the ones that are still around are too hung over to go to the morning sessions. I was not quite hung over enough to miss the morning sessions, so off I went. I’d imagine a lot of people took advantage of DefCon TV, though.

I started the day with Whit Diffie & Moxie Marlinspike’s Q&A session in Track 1. There was no topic in the program; instead, they just both answered questions about SSL and cryptography. One interesting detail: one of the reasons RSA has become more successful (or at least more frequently used) than Diffie-Hellman is that Diffie himself favored it, on account of certain attacks against which RSA holds up better (though Diffie-Hellman is better against others.) A lot of the discussion, though, was about Moxie’s notary system proposal. I have to give Moxie credit here — though I’m still not sure that I agree with his proposal, I probably spent more time debating it with people than I spent talking about any other presentation this weekend. It certainly spawned a lot of conversation.

Paul Craig’s iKAT tool is always interesting, and he presented a new version. The previous one only attacked Windows kiosks, and now he’s cross-platform. Essentially, the principle is that Internet kiosks are designed with the threat model of defending the kiosk from the user… and not defending it from the Internet. Thus, iKAT is an Internet site that can be used by the user to attack his own machine, under the assumption that his own machine is some sort of locked-down Internet kiosk with restricted permissions. iKAT allows the user to take full administrative control of most of them, either just to get unrestricted Internet access or, if he’s less friendly, to Trojan the card-reader.

Next, Alva Duckwall presented A Bridge Too Far, a talk on bypassing 802.1x via creating a layer-2 transparent bridge. This was actually a rather cool talk, and coupled very well with yesterday’s talk on exploiting hotel VoIP via VLAN-hopping by cloning the phone. With all the focus being on Layer-3 protocols these days, it’s cool to see that you can still do some interesting stuff at Layer-2.

There was a talk in the afternoon on bit-squatting — essentially, a binary version of typosquatting wherein you register a domain that’s a 1-bit error off from a legitimate domain, not intending to catch user error but rather to catch hardware and network errors. 1-bit errors are fairly common, at least when multiplied by billions of Internet users. I didn’t attend the talk because I felt that all the interesting material was basically contained in the title — the moral of the story is going to be that you should probably register the 1-bit-off versions of your own domain names if you’re going to run a highly-targeted site like a banking site. Talking to people who did attend… the consensus was that it shouldn’t have been a 50-minute talk.
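Generating the candidate registrations is trivial, which rather supports that consensus; a toy sketch:

```python
import string

VALID = set(string.ascii_lowercase + string.digits + "-")

def bitsquats(label: str):
    """Yield labels that differ from `label` by exactly one flipped bit."""
    for i, byte in enumerate(label.encode("ascii")):
        for bit in range(8):
            c = chr(byte ^ (1 << bit))
            if c in VALID:  # keep only valid DNS-label characters
                yield label[:i] + c + label[i + 1:]

# The names to defensively register for example.com:
for squat in sorted(set(bitsquats("example"))):
    print(squat + ".com")  # exa-ple.com, ezample.com, uxample.com, ...
```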

Instead, I visited datagram’s talk on tamper-evident devices. Most of them, well, aren’t tamper-evident, at least not against a skilled attacker. The attacks range from very obvious (stretching plastic, razoring up adhesive) to requiring more knowledge (dissolving adhesive with a wide variety of organic and inorganic solvents) to very clever. Note that during the Tamper Evident contest at DefCon, wherein people tried to bypass a wide variety of anti-tampering seals and devices… none of the seals or devices successfully resisted attack.

I followed this up with a talk by the DefCon NOC on Building the DefCon network. It’s an interesting challenge — building a high-bandwidth network, wired and wireless, for use by 12,000 people, many of whom will be actively attacking it, given only 3 days, using only hardware you can afford to keep in a box 51 weeks of the year. Considering their constraints they do a remarkable job. This year’s secure wireless was, so far as anyone could tell, actually secure… and possibly safer than using GSM or CDMA in this environment (GSM is definitely broken, and the not-quite-confirmed rumor is that CDMA users were hit by an 0day MitM this year, too.) DefCon TV was a huge hit, even though it did not successfully reach all rooms.

The last talk of the day was Jayson Street’s dramatically-titled “Steal Everything, Kill Everyone, Cause Total Financial Ruin!” It was sometimes amusing, but overall it was mostly a self-aggrandizing pentester talking about various (mostly physical) exploits he had pulled off. Not really any valuable content for a security pro, though your average non-security person would probably be shocked at how trivially exploitable most systems are.

Having spent pretty much the whole weekend at DefCon events, I decided to go back down to the Strip, see a show, and have some delicious steak frites and wine at the Paris. It was a nice ending to a packed weekend.

Overall, DefCon this weekend was a huge success (I’m making a note here.) The Rio was a great environment, much better than the Riviera, with enough room to grow and real food to eat. Staying in the conference hotel and having a group to enjoy DefCon with made it a much more fun experience than past years; both will be things I’ll be sure to repeat. (Incidentally, Google Plus is a great tool for attending a con with a group — it’s like having your own private Twitter — though I can’t say that I have found much else it’s good for yet.) Speaking of Twitter, while it’s been indispensable for DefCon in prior years, at this point, since everyone has a smartphone and a Twitter account, the #defcon hashtag actually has so much traffic it’s almost impossible to keep track of. Every time you bring it up there are hundreds of new tweets.

I think the new non-electronic badges were a success. While perhaps less “cool” than the electronic ones, far more people participated in the badge contest this year than have ever participated in hacking the electronic badges, and while badge lines did run 2-3 hours, at least they were available before the con started. At some point, DefCon management needs to learn that the conference is growing 10%+ per year and that they need to order enough badges for growth; considering the much lower cost of non-electronic badges, perhaps they’ll do that next year. The lines are entirely unnecessary — they exist only because everybody knows that badges have been under-ordered and people at the back of the line won’t get one. Without this pressure to get badges first, the infamous LineCon could be avoided.

DC303 and Rapid7 threw great parties. However, most of the fun I had was around the Rio pools — having them open until 2am was great, though even later would be nice (and allowing alcohol instead of having everyone smuggle it in would be an improvement, though I’m not holding my breath on that one.) Finally, thanks to DC206 for a great time, a lot of very interesting conversation, and confusing the hell out of taxi drivers.

attacks, hardware, networks, physical security, products