DEFCON 23: The Only Way to Be Sure: Obtaining and Detecting Domain Persistence

I presented a talk at the DEF CON 101 track of DEF CON 23 this year; for those of you who have been directed to the site from the talk, you can find the slides on this site here: DEF CON 23: The Only Way to Be Sure: Obtaining and Detecting Domain Persistence

Note that as the slides are mostly video demos, the deck is quite large and is only available in PowerPoint format.

attacks, mitigations, risk

DEFCON 22: Detecting Bluetooth Surveillance Systems

For anyone looking for my talks at DEF CON 22 and Thotcon 0x6 on the topic of detecting Bluetooth surveillance systems, the DEF CON slide deck is available for download here, or in PDF. The (abbreviated) Thotcon version is here.

Finally, you can stream or download the full DEF CON presentation video with slides from the DEF CON media server.

Uncategorized

Fingerprint Login and Authentication

With Apple’s introduction of Touch ID for the new iPhone 5S, there’s been a lot of news coverage of their new fingerprint-based unlock system — and not just about its usefulness for cats. People want to know: is it secure? Can someone bypass it? Within moments of its release there was already a sizeable bounty on offer for anyone who could “break” Touch ID. Of course, the Chaos Computer Club demonstrated a bypass in under a week.

But the thing about fingerprints is that they’ve been easy to bypass for more than 20 years. It’s not that hackers have figured it out “already”; rather, spies figured it out decades ago. You dust the fingerprint, photograph the pattern, print it out with an impact printer (or, in a pinch, a laser printer with the toner on the heaviest setting to leave raised printing), pour plain old Elmer’s glue on it, let the glue dry until firm but not quite solid, and peel it off. Presto! Prosthetic fingerprint.

The problem with how fingerprints are being used is that fingerprints are a form of identification, not authentication. They quickly say who you are, but they don’t prove who you are — essentially, when trying to translate the traditional username/password paradigm to biometrics, a fingerprint is like a username, not like a password. Unfortunately, it’s being used as a password. It’s especially funny on the new iPhone because they’re using fingerprints to authenticate to a touchscreen device — that is, an object that has your fingerprints all over it! If someone wanted into such a phone, it would be really easy to lift the user’s fingerprints off the screen, create a prosthetic, and unlock the device with the fingerprint reader. You can’t make a secure authentication method out of something that people leave everywhere.
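
To put the distinction in programmer’s terms, here’s a minimal sketch. Every name and data value in it is hypothetical, and real matchers are far more sophisticated, but the shape of the two operations is the point: identification is a fuzzy lookup that tolerates near-matches, while authentication is an exact proof of a secret.

    import hashlib, hmac

    # Hypothetical enrollment records: a crude fingerprint "template"
    # (a set of minutiae points) plus a salted password hash.
    USERS = {
        "alice": {
            "template": {(10, 22), (31, 7), (55, 40), (12, 9)},
            "pw_hash": hashlib.sha256(b"salt" + b"hunter2").digest(),
        },
    }

    def identify(scan):
        """Identification: fuzzy-match a scan against every enrolled
        template. Answers "who is this, probably?" -- a username's job."""
        for name, rec in USERS.items():
            overlap = len(scan & rec["template"]) / len(rec["template"])
            if overlap > 0.7:       # close enough counts; it's a lookup
                return name
        return None

    def authenticate(name, password):
        """Authentication: prove knowledge of a secret that can be kept
        secret, and rotated if stolen. A password's job."""
        candidate = hashlib.sha256(b"salt" + password.encode()).digest()
        return hmac.compare_digest(candidate, USERS[name]["pw_hash"])

    # A print lifted off the screen replays the identifier perfectly:
    print(identify({(10, 22), (31, 7), (55, 40)}))   # -> alice
    print(authenticate("alice", "hunter2"))          # -> True

Note that the matcher has to tolerate noise, which is exactly what makes it replayable: anything close enough to the template works, including a prosthetic.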

On the other hand, I can’t bring myself to care that much. There’s a general rule in computer security: “If the adversary has unrestricted physical access to your computer, it’s not your computer.” If someone’s trying to bypass the fingerprint lock on a phone, then they must have possession of the phone — and in that case there are many ways in, whether it’s locked with a fingerprint, a PIN, a password, or whatever. A fingerprint is more convenient than a PIN and probably about as secure. In either case, if the device storage isn’t encrypted, getting access to it is trivial; and if it is encrypted, the ability to perform an offline attack (an ability you have in a stolen-device scenario) means that bypassing a 4-digit PIN is equally trivial. You’re not really losing much, if any, security by going to a fingerprint.
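
The claim about 4-digit PINs is easy to sanity-check. Here’s a minimal sketch of the offline attack, assuming (hypothetically) that the attacker has pulled a salted hash of the PIN from a disk image; the salt, hash choice, and PIN are all illustrative:

    import hashlib, time

    # Hypothetical offline scenario: the attacker has a copy of the disk,
    # so on-device rate limiting and wipe-after-N-tries don't apply.
    salt = b"\x13\x37"
    target = hashlib.sha256(salt + b"4721").digest()  # the victim's PIN hash

    start = time.time()
    for guess in range(10000):                # the entire 4-digit keyspace
        pin = f"{guess:04d}".encode()
        if hashlib.sha256(salt + pin).digest() == target:
            elapsed = time.time() - start
            print(f"PIN {pin.decode()} found in {elapsed:.3f} seconds")
            break

Even if you substitute a deliberately slow key-derivation function for the plain SHA-256 here, 10,000 candidates fall in minutes. The keyspace is the problem, not the hash.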

The other problem with fingerprints as passwords — aside from the fact that you leave them everywhere — is that your fingerprint can’t be rotated. If your password gets stolen, you can change your password, but if your fingerprint is stolen, it’s stolen forever. There’s no way for you to change it. This is fine for an identifier (username), but not fine for an authenticator (password) — it puts you in the situation of “break once, break everywhere.” Once your fingerprint has been stolen by an adversary, they have it for the rest of your life. This is also why fingerprints (or any biometrics) should never be used to generate cryptographic keys.

You’ll find fingerprint readers on a lot of enterprise-model laptop computers, too. On these, the fingerprint reader is just an alternate authenticator to Windows, so Windows will still let you log in with your password if the fingerprint reader doesn’t work. It does (by design) reduce your security a bit — but once again, not much, because if someone is trying to break in via the fingerprint reader then they must have physical possession of your computer, and they’re going to get in anyway. The only protection against that is to enable BitLocker in PIN mode — that is, full-disk encryption with a PIN code required at power-on to decrypt the hard disk, and even then you’re only really safe if your computer comes with a TPM (which most business laptops do, but most other PCs do not.) Most people don’t do this, which means fingerprint or password, your data is easily accessible to someone who has possession of your PC.

So all told, there’s not much reason not to use fingerprint unlock on a phone, since phone unlock is not normally a boundary where we expect much security (as our usual mechanisms — either “swipe to unlock” or a 4-digit PIN code with unlimited guesses allowed — provide very little security anyway.) But from a systems design perspective, if you want real security, fingerprint should not be treated as an authenticator, regardless of the technology being employed.

authentication, hardware, industry, risk

The Blade Itself Incites to Violence

First we find out Verizon has been essentially running a pen register on its entire customer base for three months, under a FISA court order. Then we find out it was a renewal — given that the FISA court has approved some 38,000 warrants and denied only around 130, there’s every reason to believe that it approves a pen register on every US phone company every three months.

And then Edward Snowden turns the NSA’s terrible PowerPoint slides (seriously, could they put any more flag and eagle clip art in there if they tried?) over to the Guardian, and it looks like PRISM has direct access to every record of customer data at nine major Internet service companies. PRISM quickly overtakes the Verizon scandal in attention.

What are we to make of this? A tempest in a teapot, or that the United States has already gone over the edge into a police state? The mainstream media certainly promulgates both views — and Congress has given them plenty of ammunition to do so, with Snowden called whistleblower, hero, criminal, or traitor depending on who’s giving the sound bite.

Of course, all the major Internet companies — Microsoft, Google, Facebook, etc. — have claimed to have no knowledge of PRISM, and not to be party to any worldwide NSA-led spy ring. As someone who works in security at a major Internet company, frankly, I believe them. Which is to say that I believe each spokesperson has no knowledge of PRISM and genuinely believes his employer is not party to any worldwide NSA-led spy ring. But these companies have criminal compliance teams — groups whose role is to liaise with law enforcement around the world, and to determine which requests, subpoenas, and warrants to quietly obey and which to resist. These criminal compliance teams operate in secret, necessarily — it’s often outright illegal for them to share the requests they receive (the USA PATRIOT Act’s National Security Letters come with gag orders attached), and even when it’s not, it’s bad practice: most of the time they’re assisting in the investigation of bona fide bad people, child pornographers and fugitive murderers and the like, and talking too much jeopardizes the investigation.

Criminal compliance people are law enforcement people — they’re Lawful Good, they believe in what they’re doing, and generally rightfully so. They may care passionately about civil liberties, and they may push back on overreaching requests, but ultimately they believe in the power of government to do good, just as legislators do, or they wouldn’t be in that career — and that career requires a culture of secrecy. They don’t talk, and their managers don’t ask, because that’s their job. So the spokespeople at Microsoft and Google and Facebook and so on are telling the truth — they’ve never heard of PRISM, they don’t know about any NSA spy ring. And yet that means very little; they wouldn’t have heard of it, and they wouldn’t know about it, and the people who do won’t say. It’s their job not to say, and the great majority of the time, we as a society should be glad they’re doing their job. They put people like this guy in jail.

PRISM is probably not a spying system per se. It’s a glorified reporting layer — it presents to intelligence agents in usable form the intelligence the NSA has already collected, and allows them to easily request more. Those requests go through the usual due process, getting sent to some Internet company with an order from the FISA court. PRISM probably isn’t directly tied into the core systems of the Internet’s largest companies… but it indirectly is, by way of any number of other applications and processes, both technical and legal. Maybe even those criminal compliance teams have never heard of PRISM… they’ve heard of a few National Security Letters, and a few dozen warrants, and a few hundred subpoenas, and each one alone made sense, yet all of the data from all of them went into the NSA’s great oracle, and the whole is greater than the sum of its parts.

I will give the administration one thing: there’s no evidence that the data from PRISM is being abused. PRISM knows about your Google searches, it knows about your email’s contents, it knows all the little felonies and misdemeanors you’ve committed. And make no mistake, you have committed them: our legal code has become so labyrinthine, everyone is a felon — as Cardinal Richelieu said, “if you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him.” When even copyright infringement is a criminal offense, when the Computer Fraud and Abuse Act makes violating website terms of service (you haven’t read them) a felony, a prosecutor with the will and the political support can prosecute anyone. Yet… they don’t. The NSA isn’t turning over everyone’s drug purchases and porn habits and music downloads to local district attorneys — it doesn’t look like they’re turning over anyone’s. They’re using it to look for terrorists, because that’s their charter, and nothing more. Obama’s not lying when he says the program has thorough oversight and is carefully targeted.

Allow me to take a digression here. The Transportation Security Administration was established to secure the nation’s transportation system against terrorism. Their charter is very clear: strengthen the security of the nation’s transportation systems and ensure the freedom of movement for people and commerce. The TSA’s charter, notably, is not to wipe out drug trafficking, or to prevent smuggling, or to enforce customs laws, or to prevent illegal immigration. And thus it does not try to do these things: its rules are all regarding weapons and explosives, not drugs or contraband (those drug-sniffing dogs are CBP, not TSA), and its security measures are aimed at that target. Sometimes they may be ridiculous — X-ray scanners that can’t detect objects placed at your sides, say, or the constant “preparing for the last war” of shoe removal and liquid bans — but their aim is clear even if the shots are wild. Thus, it was no surprise that they recently decided to stop screening for small knives, golf clubs, multi-tools, and other minor weaponry. These items are no threat to the security of an aircraft — any weapon that can’t threaten more than one person at a time isn’t. No one with a knife is going to get through a cockpit door, and even if they take a hostage they’re not likely to kill more than one person — tragic for that one person, to be sure, but no threat to the aircraft, much less the transportation system. The TSA wanted to focus on threats to the aircraft — bombs, guns, and the like.

Yet the flight attendants’ union objected (naturally — they’re the ones who will get stabbed with those small knives, hit with those golf clubs, etc., and the safety of the transportation system is, to them, little consolation), and some opportunistic members of Congress latched on and threw a fit. How dare the TSA not stop an obvious threat? It didn’t matter that it’s not the TSA’s mission to stop that threat. The TSA is beholden to Congress, Congress is driven by public opinion, public opinion is driven by the media, and the media is driven by fear, because fear gets ratings. Fear sells, so it owns the media, which owns the public, which owns Congress. So now the TSA has backed off from its plan — they can stop a drunkard with a pocket knife, so they must stop a drunkard with a pocket knife. Never mind that it’s not their charter, that it has nothing to do with the safety of the transportation system, that it’s unrelated to terrorism or homeland security.

Maybe Obama’s right — maybe PRISM isn’t really a threat, just a reporting system, and maybe the NSA, despite the fact that a random analyst “sitting at my desk certainly had the authorities to wiretap anyone from you or your accountant to a Federal judge to even the President” isn’t abusing that power. Like the criminal compliance employees at major Internet companies, people working for the NSA are by and large loyal American citizens who perform their role because they believe in it, and because they know they’re doing good for their country. They swear an oath to uphold the Constitution, and that includes the Fourth Amendment. In any case, NSA surveillance is absolutely inadmissible in court for domestic crimes; FISA orders are only valid for, as the name implies, foreign intelligence.

But what happens when the media turns its attention to something other than terrorism? What happens when public opinion gets incited against something else — something evil, of course, but nevertheless something outside the NSA’s purview? What happens when the public’s fear turns from terrorism to human trafficking, or child abduction, or illegal immigration, or foreign cyber-attacks, or “hackers,” or corrupt bankers? The NSA has the evidence to catch these people — Congress will demand action. We have a hundred thousand spies now: they have the capability, they have the information. The law will change; maybe not now, maybe not for a decade, but if we don’t strangle this right now, it will change. Even if every word the President and General Alexander say is true, it cannot remain true as long as these capabilities continue to exist and grow — we know exactly where this road leads. They can do it, so they must: as Homer said, the blade itself incites to violence.

legal, privacy, society, terrorism

South Carolina Hack Attack Root Causes

Recently, the South Carolina Department of Revenue was hacked, losing tax records on 3.6 million people — that is, most of South Carolina’s population. These contained Social Security numbers at the very least, as well as 3.3 million bank account numbers, and may have been full tax returns (they haven’t said.)

There’s been the usual casting of blame after such an incident, but it’s quite interesting to read over the incident response report they had Mandiant prepare for them. Despite being “PCI-Compliant”, they had a number of vulnerabilities that let the hackers break in. But what could they really have done to protect themselves? From the report, the attacker went through 16 steps:

1. August 13, 2012: A malicious (phishing) email was sent to multiple Department of Revenue employees. At least one Department of Revenue user clicked on the embedded link, unwittingly executed malware, and became compromised. The malware likely stole the user’s username and password. This theory is based on other facts discovered during the investigation; however, Mandiant was unable to conclusively determine if this is how the user’s credentials were obtained by the attacker.

It’s not clear here if this was untargeted spam phishing with off-the-shelf malware, or a spear-phishing attack on the DOR with custom malware. If it’s the former, then this would have been prevented by any decent mail security product (to block spam and phishing) and desktop anti-malware software with current signatures & centralized monitoring. Since I would think any “PCI-Compliant” institution would have this, my guess is that this was a spear-phishing attack. The unfortunate fact is that there’s basically nothing you can do about spear-phishing and targeted malware; by its nature it evades automated detection, and security awareness training is of limited effectiveness against a phishing mail customized for your employees. So far there’s no sign that the state DOR screwed up here.

2. August 27, 2012: The attacker logged into the remote access service (Citrix) using legitimate Department of Revenue user credentials. The credentials used belonged to one of the users who had received and opened the malicious email on August 13, 2012. The attacker used the Citrix portal to log into the user’s workstation and then leveraged the user’s access rights to access other Department of Revenue systems and databases with the user’s credentials.

And right here in step 2 I think we’ve found the root cause of the attack. They had an external remote access service that allowed single-factor login — coming in through the perimeter from the Internet using only a password. Given that spear-phishing & targeted malware are not preventable, you have to assume that passwords will be stolen and have barriers in place to keep password-bearing attackers out; two-factor auth on remote access services should be a bare minimum, whether that’s SecurID tokens, smart cards, or other mechanisms.
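
For a sense of how small that bare minimum is, here’s a sketch of TOTP verification (the RFC 6238 scheme behind most soft tokens and authenticator apps) using nothing but the Python standard library. The shared secret and the drift window are illustrative values, and a real deployment would add rate limiting and replay protection:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, now=None):
        """Compute an RFC 6238 time-based one-time password (30s steps)."""
        key = base64.b32decode(secret_b32)
        counter = int((now if now is not None else time.time()) // 30)
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                      # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return f"{code % 10 ** digits:0{digits}d}"

    def verify(secret_b32, submitted, skew=1):
        """Accept codes within +/- `skew` 30-second steps of clock drift."""
        now = time.time()
        return any(
            hmac.compare_digest(totp(secret_b32, now=now + 30 * d), submitted)
            for d in range(-skew, skew + 1)
        )

    # Illustrative shared secret -- per-user and server-side in real life.
    SECRET = "JBSWY3DPEHPK3PXP"
    print(verify(SECRET, totp(SECRET)))              # -> True

The particular mechanism matters less than the property it buys you: a phished password alone no longer gets an attacker through the perimeter.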

3. August 29, 2012: The attacker executed utilities designed to obtain user account passwords on six servers.

Dumping the LSA secrets requires administrative privileges. It’s possible the credentials the attacker acquired in step 1 were administrative on some servers, in which case there’s no new exploit here. But if they weren’t, the attacker elevated privileges in some way, implying that the DOR might have had a patch-management problem. Once again, though, it’s not clear that there’s much they could have done about it — patching inside of 30-60 days is actually very difficult for an enterprise of decent size, even a mature, technically competent one. If the attacker used a recent exploit, then the DOR might well have been no worse off patching-wise than everyone else. On the other hand, if they used something ancient, this might be another failure on the DOR’s part. That said, with proper authentication on the remote access service, the attacker shouldn’t have even gotten this far.

4. September 1, 2012: The attacker executed a utility to obtain user account passwords for all Windows user accounts. The attacker also installed malicious software (“backdoor”) on one server.

At this point the attacker is a domain administrator; if he’s dumping “all Windows user accounts” he’s got at least a network login on the domain controller. Chances are that a domain admin had logged onto the first compromised server at some point, and thus the attacker captured his cached credentials. No new attacks or exploits here.

5. September 2, 2012: The attacker interacted with twenty one servers using a compromised account and performed reconnaissance activities. The attacker also authenticated to a web server that handled payment maintenance information for the Department of Revenue, but was not able to accomplish anything malicious.
6. September 3, 2012: The attacker interacted with eight servers using a compromised account and performed reconnaissance activities. The attacker again authenticated to a web server that handled payment maintenance information for the Department of Revenue, but was not able to accomplish anything malicious.
7. September 4, 2012: The attacker interacted with six systems using a compromised account and performed reconnaissance activities.
8. September 5 – 10, 2012: No evidence of attacker activity was identified.
9. September 11, 2012: The attacker interacted with three systems using a compromised account and performed reconnaissance activities.

Nothing interesting here. Very few enterprises could have detected the above; it would require the sort of aggressive NIDS with extensive monitoring that’s normally only found in classified environments.

10. September 12, 2012: The attacker copied database backup files to a staging directory.
11. September 13 and 14, 2012: The attacker compressed the database backup files into fourteen (of the fifteen total) encrypted 7-zip archives. The attacker then moved the 7-zip archives from the database server to another server and sent the data to a system on the Internet. The attacker then deleted the backup files and 7-zip archives.

This was a database exfiltration of over 8 gigabytes of data. This is actually one thing that NIDS could be effective against if tuned properly.
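
“Tuned properly” here doesn’t have to mean deep packet inspection; even coarse flow accounting would have flagged this. A minimal sketch of the idea, run over hypothetical flow records (the hosts, volumes, and the 1 GB threshold are all made up for illustration):

    from collections import defaultdict

    # Hypothetical flow records from a border router or NIDS sensor:
    # (internal_host, external_dst, bytes_sent). All values illustrative.
    flows = [
        ("10.1.2.15", "203.0.113.9", 2_500_000_000),   # the database server
        ("10.1.2.15", "203.0.113.9", 3_100_000_000),
        ("10.1.2.15", "203.0.113.9", 2_900_000_000),
        ("10.1.4.22", "198.51.100.7", 40_000_000),     # normal traffic
    ]

    THRESHOLD = 1_000_000_000       # 1 GB/day outbound; tune per host role

    outbound = defaultdict(int)
    for host, dst, nbytes in flows:
        outbound[host] += nbytes

    for host, total in sorted(outbound.items()):
        if total > THRESHOLD:
            print(f"ALERT: {host} sent {total / 1e9:.1f} GB outbound today")

A database server that normally talks to nothing outside the perimeter suddenly pushing 8 gigabytes out is exactly the anomaly this catches; the hard part is building and maintaining the per-host baselines, which is why IDS tuning is such high-effort work.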

The remainder of the attack steps were just some more reconnaissance, backdoor testing, and other probes, followed by Mandiant shutting down the attacker’s entry point.

The interesting thing here is that assuming this was spear-phishing with targeted malware, the only mistakes the DOR seems to have made were insufficient IDS tuning (which is honestly usually high-effort, low-payoff security work) and having single-factor remote access (which is catastrophic.) There’s nothing in this report that makes it look like the DOR’s IT department was run by a gang of idiots (like in, say, last year’s many Sony attacks); it looks like an organization that was doing most things right but had failed to deploy two-factor remote access. I’d wager their IT security guys wanted to, too, but were blocked by either the inconvenience to users or the cost of rolling out tokens or smart cards.

The DOR has spent more than $14 million recovering from this incident; I’d bet two-factor auth is looking pretty cheap now.

attacks, mitigations, risk