The second day of BlackHat started out with a keynote by Mudge. I attended this one despite the normally-dull nature of BlackHat keynotes, because while Mudge is a Fed now (he works for DARPA), he has a long history as a contributor to hacker culture and I wanted to hear what he had to say. He introduced a DARPA program called Cyber Fast Track (it’s not government if it doesn’t have “cyber” in the name, after all) that allows small companies and even hackerspaces to receive grants to do infosec research, without having to jump through the hoops and fill out the forms for traditional government financing, all of which are designed for huge government contractors like Lockheed Martin and are nigh-impossible for individuals and startups. I appreciate the work he’s doing, and especially the fact that accepting these grants involves giving DARPA only government-use rights and not signing over the IP for the research.
Next I went to Chris Paget’s overview of the Final Security Review for Windows Vista. Since I’m someone who’s actually done Final Security Reviews for Microsoft and is part of the team that owns the Security Development Lifecycle, there was nothing here I didn’t know. However, Chris gave a very favorable review of Microsoft, and it was clear that she really appreciated the work Microsoft does in securing their products. For all the bad press Microsoft used to get in security, Microsoft has the most mature and complete security processes in the industry, and this is a remarkable turnaround when you look at where they were in 2001. It’s good to know that even on the much-maligned Vista they gave Chris and her team full access to everything and everyone remotely relevant, and got a very good return on investment in terms of security bugs fixed.
I missed the next session to pick up my DefCon badge. In my five years of attending DefCon, they have run out of badges every time, thanks to DT underestimating attendance (each DefCon has been much bigger than the last, recessions notwithstanding.) As a result, everyone queues up early to get one, making for hours-long lines. Though this year they went for a non-electronic badge, and thus at least had them on time, they did still run out by midday Saturday. Lines were about an hour at BlackHat, and apparently ran to over two at the Rio.
In the afternoon, I dropped into Moxie Marlinspike’s SSL and the Future of Authenticity. Moxie is worried about the constant compromises of SSL Certificate Authorities — many have had bugs in them that made it possible to get real, valid certificates issued to you for other people’s domains (e.g. google.com, or your bank), thus making it possible to eavesdrop on SSL communications in a man-in-the-middle scenario. One of the most-public breaches was the attack on Comodo that resulted in many false certificates being generated for some of the most important sites on the Web. But what happened to Comodo? Nothing! The CA system has no ability to change. Browsers trust Comodo, and even if we don’t like the idea of trusting them anymore — when they have been proven untrustworthy — there’s nothing to do about it. If browser vendors dropped Comodo, 20-25% of all secure sites on the Web would stop working. Moxie proposed a new system (he demonstrated it with a Firefox plugin called Convergence) wherein the user selects trustworthy parties, called notaries, which verify certificates for him. The notary system will prevent a man-in-the-middle attack just as well as the CA system does, and if you distrust a notary you can just switch to others, and nothing breaks. The user chooses who to trust. On one hand, this does give trust agility — the ability to change who you trust — which Moxie highly values, and it does prevent man-in-the-middle attacks unless the attacker is very close (from a network-topology standpoint) to the destination host (which is unusual — in most MitM attacks, the attacker is very close to the source host, not the destination.) On the other hand, I’m not quite convinced — the system does not prove authenticity, only that no MitM is present, so it doesn’t really substitute for the CAs. 
However, I’d say my friends and I spent more time discussing this talk than any other at BlackHat or DefCon, so right or wrong he got us thinking, which can only be good in the long run. The CA system really is broken, and it’s untenably fragile — if one CA has its private key widely distributed, everyone will be able to make fake SSL certificates forever. And there are thousands of CAs.
I went up to IOActive’s IOAsis suite at the top of the Forum Tower in lieu of the next BlackHat session. I’m not sure what actually happened between BlackHat and IOActive this year, but for the first time since I’ve attended the conference, IOActive had no official presence at the conference (whereas before they’ve been one of the top-tier sponsors) and ran their own parallel events at Caesars instead. I had a pass to IOActive’s events as well — spend five years in infosec in the Seattle industry and it’s hard not to know half of IOActive, particularly their CEO who seems to have the remarkable ability to remember everyone she meets, instantly and forever. I went to a talk they hosted about malware tools like Spy Eye and Zeus. Overall, they’re remarkable professionally-developed tools, with high-quality tutorials and documentation. They really make being a criminal easy, and if you happen to live in a non-extradition country like Russia, it turns out crime does pay.
Finally, I went to a talk about the latest Chip & PIN exploits. I have to admit, as an American, Chip & PIN exploits always seem kind of lame. They boil down to “with this amazing exploit, we can make European credit cards almost as insecure as American ones are all the time!” The fact that if you steal a credit card you can, you know, buy stuff with it until the cardholder notices it’s gone and calls the bank just doesn’t seem like a revelation. This said, it is interesting to see some of the dubious security decisions made in this “secure” payment system, and Chip & PIN will be coming to the U.S. in the near future. The worst threat here is not technical but legal — in most European countries, the fact that a transaction happened via Chip & PIN is considered prima facie proof that you authorized the transaction and are fully liable — either that, or you were negligent with your PIN and still fully liable. The fact that it’s possible to make these transactions without a PIN makes this dangerous.
At this point, BlackHat USA 2011 was over. I headed back up to IOActive’s IOAsis suite for their post-conference reception. I not only met up with several people from IOActive, but I also happened to strike up a conversation with someone who informed me that she was with the DC206 group — the local DefCon club here in Seattle that meets at The Black Lodge about 10 miles from here. We quickly found we had several friends in common, and she introduced me to the other DC206/Black Lodge people at the party. This worked out very well, as I ended up hanging out with them for the next three days of DefCon, and had a lot of great conversations with a very interesting mix of security pros, makers, and hackers as a result. Though I’ve been by the Black Lodge and DC206 events before, I plan to make an effort to be present for more of them in the future.
We went to the Microsoft party at the Haze nightclub in Aria, primarily because given the youth of the Aria property, none of us had ever seen it before. The party itself wasn’t bad — quite good compared to last year’s event — and they had a nerdcore rapper performing (I honestly don’t remember if it was DualCore or MC Frontalot, having encountered both of them multiple times during the week.) However, we stayed only briefly then moved to the Rio, where we hung out with other DefCon attendees at the pool. The Rio was kind enough to keep the pool open until 1am (much later than normal) for DefCon attendees, and even until 2am on subsequent nights, which was quite appreciated.
I spent last week in Las Vegas, for BlackHat USA 2011 and DefCon 19 — my annual security conference pilgrimage. Overall impression: the quality of the actual presentations was below-average this year, but it was still an educational experience, a good professional networking event, and probably the most fun I’ve had at DefCon so far.
BlackHat’s had the usual (for the last few years) dull government keynote speaker (Ambassador Cofer Black this year, who said “cyber” about 100 times, as only government speakers ever do) for the first day. I spent a bit of time at a WiFi Penetration Testing Workshop, followed by a very interesting talk on Google Chrome OS. The gist of it is that in Chrome OS, since the browser is the operating system, a cross-site scripting exploit (which is very common and very easy) becomes the equivalent of administrative remote code execution on a conventional OS like Windows or MacOS. Since an XSS can call Chrome OS’s APIs, clicking one malicious link can give an attacker full access to all data for all applications on the system. While I don’t use Chrome OS (and, frankly, neither does anyone else), rumors that Windows 8 will support DHTML-based applications (like all of Chrome OS’s apps are) make me hope that the Windows 8 team is considering exploits like this.
Next was Dan Kaminsky’s talk, Black Ops of TCP/IP 2011. While it sure beat last year’s Kaminsky talk (“Hey, let’s talk about DNSSEC! By the way, did I mention I started a new company that makes DNSSEC tools?”), the description was rather misleading — he spent a third of the talk talking about BitCoins (short-short version: the BitCoin system does not scale well, and unless used very carefully is not anonymous), then talked a bit about various sequence-number prediction vulnerabilities (well, sort-of-vulnerabilities), and showed off a tool (“nooter”) that can detect non-neutral networks (i.e. networks, like your ISP, that may be favoring some companies over others for extra cash rather than providing you a straightforward Internet connection.) The nooter tool was kind of clever, though, and it really would detect non-neutral ISPs, which is a valuable public service even if, well, not all that interesting.
I missed a talk on femtocells that I’ll have to catch on video, as it sounds interesting. Femtocells are the cell-network extension terminals you can get put in your house if you have terrible cell reception, but since this amounts to the cell phone company giving you physical control of an extension of their network, they’re apparently eminently hackable. But instead, I went to a talk on post-exploitation forensics with Metasploit. He made a module for Meterpreter that allows you, the attacker, to remotely mount a block device from a compromised victim machine. As a result, you can actually access the disk as if it were local, even to the point of using forensic imaging tools like EnCase on it. It’s slow, of course, but this brings capabilities to every hacker that… well, that the FBI and NSA have probably been doing to people for several years now.
I skipped the talk on bit-squatting, because I felt the description essentially encapsulated all there was to say about the topic. Due to quantum mechanics, thermodynamics, and other inescapable laws of physics, computers make one-bit errors pretty frequently. If you register a domain that is 1 bit off from a real domain, occasionally (very occasionally) someone who types in the real domain name perfectly fine will get sent to your domain instead. So if you are running a high-sensitivity business site, you might want to register all the valid 1-bit-off versions of your domain name, too, to keep malicious people from squatting it. It’s just typo-squatting with binary. From talking to people who went to the talk, they pretty much agreed that this could have been a 10-minute talk instead of 75.
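The enumeration itself is trivial, which is part of why the topic feels thin. A minimal sketch (the domain name is just an example; I assume lowercase ASCII labels, since domains are case-insensitive) that lists every 1-bit-off variant of a domain that is still a registrable hostname:

```python
import string

# Characters legal in a hostname label. Domains are case-insensitive,
# so lowercase letters, digits, and hyphens cover the distinct variants.
VALID = set(string.ascii_lowercase + string.digits + "-")

def bitsquats(domain):
    """Return every 1-bit-off variant of `domain` that is still a valid hostname."""
    variants = set()
    for i, ch in enumerate(domain):
        if ch == ".":                      # leave label separators alone
            continue
        for bit in range(8):               # flip each bit of the ASCII byte
            flipped = chr(ord(ch) ^ (1 << bit))
            if flipped in VALID:
                variants.add(domain[:i] + flipped + domain[i + 1:])
    return variants

print(len(bitsquats("example.com")), "registrable bit-squats of example.com")
```

Registering the few dozen variants this produces is cheap insurance for a high-value domain.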
Instead, I hit Aerial Cyber-Apocalypse. These people bought a cheap Army target drone, replaced the engine with electric, and added WiFi, GSM, and Bluetooth sniffers to it. The result: a tiny UAV, with GPS-guided autopilot, that can fly autonomously, circle an area, and eavesdrop on all the wireless networks and Bluetooth devices there, as well as hijacking nearby cell phones. Plus you can connect to the UAV via 900MHz radio and actually launch proactive attacks over the WiFi. Suddenly wireless networks inside a walled or fenced compound aren’t so safe. Though what this really made me think is “So, less than $2000 will make you a little aircraft, capable of carrying 20-50 pounds, that’s GPS guided and can take off, fly for over an hour, and land on its own on a 40-foot runway without any external control. Why exactly do drug smugglers build manned submarines instead of building these things by the dozen? 20-50 pounds of coke is not insignificant.”
Also during the day, Microsoft announced a $200,000 prize for development of the best new mitigation technology of the year. This is actually kind of neat — companies pay bug bounties all the time, but a prize not for finding something wrong but for finding a way to prevent exploits is new. They’re looking for things like StackGuard, DEP, and ASLR that have really made modern OSs much harder to exploit than older versions (well, except MacOS, which falls over if you blow on it.) On one hand, $200,000 is a lot of money, but on the other hand, you’d think someone who developed something like this would make a lot more money just starting a company to sell it instead of handing it to MS for a prize. Anticipating this, the terms of the contest say that collecting the prize gives MS the non-exclusive right to use the technology if they wish — including building a version of it into Windows if they think it appropriate — but does not sign over the IP to Microsoft. You retain ownership.
The evening’s Pwnie Awards included a well-deserved lifetime achievement award, and some very amusing award categories — all five nominees for “Most Epic Fail” were divisions of Sony, and the award for “Epic 0wnage” had nominees of Anonymous for the HBGary hack, LulzSec for hacking everyone, Bradley Manning, and Stuxnet. “Worst Vendor Response” went quite deservedly to RSA, for essentially losing the keys to the kingdom and then trying to cover it up, resulting in the Chinese breaking into Lockheed Martin.
For the evening, I went to the private Qualys reception at Yellowtail restaurant in the Bellagio and ate some sushi, while chatting with someone visiting from Germany. I then moved over to McAfee’s party atop Chateau at the Paris, where I spent a lot of time talking to security pros, as well as reminiscing about 1990s games with someone in a DOOM shirt (it said “IDDQD” and “IDKFA” on it.) Alas, I spent a little too much time there, as by the time I left to head to the WhiteHat Security/Accuvant Labs party (they had Crystal Method playing) at PURE, the club was full and they weren’t letting anyone else in, even those like me with invitations. So I took a taxi over to the Palms to drop into the Rapid7 party. Rapid7 (owners of the fantastic, indispensable, and free Metasploit tool) threw by far the best BlackHat party I’ve ever been to — normally these are fairly dull events (95% male, mostly standing around trying to talk over the music), but this was an actual party — I mean, people were actually dancing on the dance floor, which is unheard-of for a BlackHat party. Admittedly, part of what made it good was that Moon (the club on top of the Palms) is an incredible space — top of a skyscraper, roof open to the sky, balconies overlooking the Strip and the city on all sides, multiple levels so that there was both a “loud” area and a “quiet” (relatively) area so that both talkers & partiers could have a good time, etc. Still, it was a good time and pretty impressive for a vendor party. And thus ended Day 1.
With the news that the raid on Osama bin Laden’s compound resulted in the capture of at least 10 hard drives and over 100 miscellaneous data storage devices (CDs, DVDs, flash drives, floppy disks, etc.), a common question that’s come up on news sites is “So, how likely are we to be able to decrypt these things? How good is the best non-government-grade encryption, anyway?”
Pretty good. The actual algorithm used is generally AES-256, which is so far as anyone knows unbreakable. The only known way to bypass it is by guessing the key, and guessing a 256-bit key is computationally infeasible. Imagine the NSA has a computer that can break 56-bit DES — the standard government code of a decade ago — in a single second. If they had a billion of those computers (vastly more than they do, even though the NSA has acres of supercomputers), it would still take 5×10⁴² years to crack a single AES-256 key — that’s a billion billion billion billion times the age of the Universe. It cannot be done.
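The arithmetic behind an estimate of this order is easy to reproduce. A quick sketch (the machine count and per-machine rate are the hypotheticals above; the exact exponent wobbles with the assumptions and whether you count the full keyspace or the average half, but the conclusion doesn’t):

```python
# One hypothetical machine tests 2**56 keys per second (i.e. it cracks
# 56-bit DES in one second); assume a billion of them run in parallel.
KEYS_PER_SEC_PER_MACHINE = 2 ** 56
MACHINES = 10 ** 9
SECONDS_PER_YEAR = 365 * 24 * 3600

keyspace = 2 ** 256                       # every possible AES-256 key
years = keyspace / (KEYS_PER_SEC_PER_MACHINE * MACHINES) / SECONDS_PER_YEAR

print(f"full search: about {years:.0e} years")   # on the order of 10**43 years
```

Even shaving several orders of magnitude off these assumptions leaves the answer at “never.”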
But here’s the good news for people trying to break into Osama bin Laden’s hard drives — they probably don’t need to crack AES-256. Implementing a crypto algorithm is really the easy part of cryptography — the hard part is key management. How do you keep track of the key (which is basically a 77-digit number) and make it usable by people? There are a variety of potential weaknesses:
1.) Crypto software often has bugs or environmental factors that leak keys. AES may be unbreakable, and software like TrueCrypt and PGP implement AES, but is their actual implementation perfect? It may not be — there may be bugs in the software that make extracting the key possible.
2.) Software doesn’t run in a vacuum. For instance, when running software on Windows, segments of code and data not in use are swapped out to disk. If the crypto key happened to be in memory and was swapped out, that key might remain on the disk for quite some time. A skilled attacker using forensics software might be able to obtain some or even all of the key this way.
3.) Because no one can remember a 77-digit number, generally not only is the data on a disk encrypted, but the key itself is encrypted with a password and stored next to the data. Unless the password is 50+ characters long, it’s actually a lot easier to try every possible password than it is to try every possible key. And short passwords (<12 digits to those of us in the civilian world, maybe up to 15-16 for the NSA) can be cracked instantly using a rainbow table. What's more, people re-use passwords -- if the same password as is used for the crypto software is also used to log into the PC, or into some web sites, or for multiple kinds of encryption, etc., it may be possible to attack some other, weaker system for the password and then use it to decrypt the key.
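A quick comparison shows why attackers go after the password rather than the key. Assuming 95 printable ASCII characters per position (my round number) against a 256-bit key:

```python
import math

KEY_BITS = 256
CHARSET = 95          # printable ASCII characters per password position

for length in (8, 12, 16, 40):
    bits = length * math.log2(CHARSET)    # effective bits of a *random* password
    side = "weaker" if bits < KEY_BITS else "stronger"
    print(f"{length:2d}-char password = {bits:5.1f} bits ({side} than the key)")
```

By that measure, a fully random password only matches a 256-bit key at around 39-40 printable-ASCII characters, and since human-chosen passwords are nowhere near random, a 50+ character rule of thumb is about right.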
The NSA probably has key-extraction scripts already written and ready to go for hundreds of kinds of crypto software, operating systems, etc. to prevent them from having to do the comparatively very hard task of cryptanalysis.
With Osama bin Laden in particular, they may have another advantage -- due to the fear of CIA/NSA "back doors" in American and European cryptography products, there has been a tendency in Islamist movements to write their own cryptography software. Ironically, the back doors probably don't exist -- but writing your own cryptography software is almost always a recipe for disaster. The problem is that anybody can write a security system so strong that they can’t figure out how to break it, and many times they mistakenly assume that means nobody can figure out how to break it. Almost everybody gets cryptography wrong the first few times they try to implement it; if bin Laden were using some sort of “homebrew” crypto that hasn’t been peer-reviewed by a few dozen cryptanalysts, it almost certainly has a key-leaking bug in it somewhere.
Overall, even though consumer-grade encryption is actually very strong and computationally infeasible to break, it is extremely likely that the NSA will be able to bypass whatever crypto Osama bin Laden used on his hard drives — if, indeed, he used any at all. They just won’t do it by attacking the crypto.
The mainstream press is full of articles telling you how to use secure passwords, like this one in MSNBC or this one in TechNewsDaily. They echo the traditional wisdom on password security — use a long password, put numbers and symbols and multiple cases in it, and don’t record it anywhere.
Well, I suppose there’s nothing wrong with that, but it’s usually not very useful. Let’s look at the advice in the second article above:
1.) Don’t be cute
Okay, they have a good point here. Using a password like 123456, qwerty, password, secret, etc. actually will get your password hacked. If your password is subject to a dictionary attack, it genuinely is very easy to get into your account. Keep in mind that a “dictionary” doesn’t mean the Merriam-Webster one, though — it means a wordlist of common passwords, so things like 123456 and major historical dates and most proper names are in the dictionary. Don’t use them.
2.) Longer is better.
3.) Use the shift key.
4.) Comic book cussing is good.
These three are sort of true, but usually aren’t useful. Assuming all lower-case letters, there are 308 million possible 6-character passwords, yet 208 billion 8-character ones. Numbers, case, and symbols turn that 208 billion to 722 trillion. But for passwords on web sites, it’s irrelevant! To crack a website password, the attacker has to send each guess to the server. The proper solution here isn’t longer passwords for users — it’s password lockout. If after 3 wrong passwords, you’re required to wait just 5 minutes before you can try again, even that all-lower-case-letters 6-character password will require an average of 655 years to crack. Password lockout makes brute-force hopeless — so all your password has to be is something not in the dictionary (for hacker values of “dictionary”). More secure sites like banks could implement progressive lockout — say, after being locked out for 5 minutes three times without a correct password, disabling the account entirely and requiring you to call or otherwise verify your identity.
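The lockout arithmetic is worth a sanity check. The exact figure depends on how you count tries, but with the simplest counting (3 guesses, then a 5-minute wait, repeated forever — my assumptions), the order of magnitude is the point:

```python
PASSWORDS = 26 ** 6                       # all-lowercase 6-character passwords
GUESSES_PER_YEAR = 3 * 12 * 24 * 365      # 3 tries per 5-minute window

worst = PASSWORDS / GUESSES_PER_YEAR
average = worst / 2                       # on average, the attacker finds it halfway

print(f"worst case: {worst:.0f} years, average: {average:.0f} years")
```

Whether it comes out to 500 years or 1000, brute force against a locked-out account is hopeless.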
The one place this is true, however, is for passwords protecting or being used as cryptographic keys. If you have an encrypted file, you want the password to be long and complex, because someone who has the encrypted file can try all the passwords he wants as fast as he wants. There’s no server to lock him out — he’s doing the cracking on his own machine! But for web site passwords, it just doesn’t matter at all.
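For contrast, a sketch of the offline case. The guess rate is my assumption — password crackers on commodity GPUs of this era managed on the order of a billion hash guesses per second against common hash types:

```python
PASSWORDS = 26 ** 6                  # the same all-lowercase 6-character space
OFFLINE_GUESSES_PER_SEC = 10 ** 9    # assumed: ~1 billion hashes/sec, one GPU

seconds = PASSWORDS / OFFLINE_GUESSES_PER_SEC
print(f"offline exhaustion: about {seconds:.1f} seconds")
```

The same password that takes centuries to brute-force through a lockout falls in under a second offline, which is exactly why the two cases call for completely different advice.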
5. Keep it centered.
This is just plain silly. It’s not remotely true that “nearly all” passwords are stored with the last character in clear; in fact, most aren’t stored at all, using a hash check instead. This is a particular flaw in one specific password storage routine. There have been others — for instance, the old NT LANMAN hashes were split such that a password could be broken into 7-character chunks and each cracked individually, so passwords of 8-13 characters were actually easier to crack in some cases than 7-character ones. Must we always figure out exactly what password-storage routines every app and website uses, and craft passwords to match? Of course not.
6.) Keep it fast, keep it mental.
If it’s your ATM PIN, you may have to worry about shoulder surfing. Likewise if you work for the CIA and there are spies everywhere. But passwords you use at home? Probably not a big concern. And what about writing down passwords — why not do it? If the password record is stored in your house, someone would have to burgle you to get it, which is (hopefully) pretty unlikely. Now, writing it down in a place proximate to attack is a bad idea, of course — putting your work password on a post-it on your workplace desk, for instance, or writing down your banking & credit card passwords on a paper in your wallet (right next to the credit and debit cards that identify which banks you use and the ID that shows your name…) is a recipe for getting hacked. Putting a password list into a dedicated device is very secure, albeit excessive for most people.
7.) Remain paranoid.
8.) Don’t double up.
Password rotation and avoiding reuse are actually the best recommendations on the list. For websites, a simple 6- or 7-letter password you change every 6-12 months and don’t recycle is probably a great deal more secure than setting your password to &*Q}}@#$7-=[\?~^.
It's also very hard to remember to do.
9.) Loose lips sink ships.
This isn’t really related to password selection like the others, but yeah, don’t tell other people your passwords unless you’re entirely comfortable with them being you. If it’s your spouse, fine, but sharing passwords among semi-trusted groups like coworkers is a bad idea, and giving them to anyone on the phone who claims to need them is a terrible one. (One of the most famous hacks of AT&T’s COSMOS billing system back in the ’80s came from someone simply calling an operator and saying “Hi, this is Ken [the name of the company CEO at the time]. What’s the root password?”)
10.) Don’t turn your back on your computer.
Oh, come on, this is why we have screen savers.
If I were to come up with a list of password security advice, it would look like this:
1.) Don’t use dictionary words, people’s names, or anything you think might be a common password. Make up something unique.
2.) If the password is to something important — like your bank account — change it every few months.
3.) Never use the same password for important things as you use for frivolous websites.
And that would be about it. Short enough to remember.
I’ve just returned from a trip to BlackHat Briefings USA 2010 and DefCon 18. As always, it was an enjoyable week in Las Vegas learning about the latest research, networking with the surprisingly small world of security professionals, and generally having fun hanging out with a lot of interesting people with the hacker mindset.
BlackHat started out with a keynote from Jane Holl Lute, Deputy Secretary of Homeland Security. She gave the sort of banal, predictable speech we expect from a political appointee — the country needs a secure homeland, dynamic economy, and the rule of law. “Cyberspace” isn’t a warzone, because wars happen somewhere, kill people, are lawless, and “cyberspace” isn’t like this. (The one sure sign you’re listening to a government official is the constant use of the prefix “cyber-”. An even more sure sign is the use of “cyber” as a noun by itself, which so far as I can tell is done only by feds.)
She states that the five essential missions of DHS are to prevent terrorist attack, secure borders (while expediting trade & travel), enforce immigration laws, ensure the safety & security of “cyberspace,” and help build a resilient society. While I really like the emphasis on resilience in her rhetoric, I do wish DHS had more visible efforts in that direction rather than appearing to be wholly focused on prevention. She also laments that billions have been spent in cybersecurity, but the most fundamental problems still aren’t fixed, and claims that the administration wants to build a cybersecurity strategy and vision for the nation. I find this claim curious for two reasons: first of all, billions have been spent on physical security, too, and yet we don’t seem to have “fixed” crime and violence, so why should we expect information security to be any different? And second, DHS saying we need a “cybersecurity” strategy implies that they don’t have one.
Jeff Moss seemed far more excited about this talk than its content warranted. Simple politeness to a speaker, or the effect of his presence on the Homeland Security Advisory Council? Also, during Q&A one person asked her why, given that the TSA is the laughingstock of the world, we should expect DHS to do any better with the Internet. (While the question is admittedly a cheap shot and not an actual argument, her response — which was to say that the TSA is just fine and not mocked throughout the world at all — did not exactly inspire confidence either.)
My first session after the keynote was called Base Jumping, by the Grugq. This was one of two major talks about cell phone hacking on GSM this year. The GSM protocol specification runs dozens of documents and thousands of pages, but according to the Grugq, the important one is GSM 04.08, which defines layer 3.
GSM is based on TDMA (Time Division Multiple Access), so decoding is based on time — the clock in a phone must be synced with the clock in the base station. Only a tiny amount of data is sent per timeslot: just 23 bytes, so you can fuzz it exhaustively in 3 days (and he did.)
Communication is done over a variety of named channels. BCCH (broadcast control channel) is how a base station sends out its information messages. PCH (paging channel) announces incoming SMS or phone calls. RACH (random access channel) is used by the phone to request a channel, which it gets back over AGCH (access granted channel). Opening a channel is slow – it takes 2-3 seconds. Since it’s based on timeslots, it can take quite a while for the base station to have an open slot of the appropriate channel to reply in.
Collisions are frequent since channel number is just 25 bits, and some cheap phones actually hardcode a list of random numbers instead of generating them (apparently generating a 25-bit number is just too hard for them.)
Police sometimes use IMSI catchers, which impersonate the network and make the phones all hand over their IMSI (International Mobile Subscriber Identifier — your ID off your SIM card that tells the phone company who you are.) The protocol is flawed — the phone authenticates with the network, but the network does not authenticate to the phone, and thus can be impersonated.
A German group built an open-source baseband for a common, cheap cell phone (the Motorola C118 or C123, about 5 Euro on eBay). This can then be hacked to send arbitrary GSM traffic. Among the Grugq’s apps were:
RACHell: request channel allocation, then flood the base station with requests. This will DoS the entire cell by using all the channels. A cell can only hold about 1000 users. Since the cell is backed up to a base station controller (BSC), this attack may take down the BSC as well (which shuts down the whole tower for half a day.)
IMSI Flood: send IMSI ATTACH messages, indicating a user coming online. These are sent pre-authentication, and if you send too many random numbers as IMSIs, it can overwhelm the HLR/VLR infrastructure (the database that tells which tower has which phones attached to it) and takes down the whole network. This could also be used to make police IMSI catchers pretty much useless. I got the idea that the Grugq had not actually tested this, since taking down a cell network might get a little unwanted attention.
IMSI DETACH: When phones are turned off, they tell the network they’re no longer available via sending a single unauthenticated frame. If you have someone’s IMSI (which you can look up by phone number for $0.006,) you can send one for someone else, which disables that phone from receiving calls or SMS and cuts off any in-progress phone calls. The victim can still make new calls, however, which will reattach them to the network — but if you’re sending DETACHes every 5 seconds, this will do little good.
Baseband fuzzing: fuzzing the baseband (the radio in individual phones) by impersonating the tower pretty much causes every phone available to crash. However, lacking the code for the basebands, the Grugq didn’t find any remote exploits here. Still, the overall point is that GSM is no longer a walled garden — anyone can send GSM traffic with minimal equipment now, and protocol security is required.
The next session I attended was More Bugs in More Places, by David Kane-Parry of Leviathan Security. This was an overview of the SDKs and security models for Android, Windows Phone 7, BlackBerry, and iPhone. There was nothing particularly new here, nor did he come to any conclusion as to the superiority or inferiority of any one of the platforms, so I’m not going to go into details.
The next talk was Barnaby Jack of IOActive with the wildly popular topic of jackpotting ATMs.
Current ATM attacks are mostly skimmers, physical theft, ram raids (dragging the ATM away with a truck,) card trapping and shoulder surfing PINs, or frontal attack via safe cutting or even explosives. Barnaby Jack wanted to instead attack the software. Most new model ATMs are Windows CE based, with an ARM/Xscale processor, remote connection via TCP/IP or dial-up, with SSL support and a Triple DES encrypted PIN pad. Since the Windows CE developers concerned were more interested in protection (in the process sense) than security, this provides an opportunity.
To reverse engineer this, he bought a couple of ATMs and had them delivered to his house (which the delivery people found rather bizarre, but delivered anyway.) ATMs boot directly to a proprietary ATM application. In order to get a shell, he connected a JTAG interface for full debugging access to the processor core, set a breakpoint on CreateProcess(), and replaced the target ATM executable string with explorer.exe. With Explorer, he could connect a USB disk and keyboard and copy files off for offline research, make registry changes permanent (so as to always boot Explorer), create a debugging environment, then set up remote app debugging in Visual Studio.
The external attack surface is limited to the card reader, keypad, network, and motherboard inputs. This leads to two possible attack plans — remote over the network, or a walk-up attack. It turns out the walk-up attack is quite possible, since while the cash is protected by a two-inch-thick steel safe, the motherboard is protected by a one-key-fits-all lock you can buy keys for on the Internet.
With the motherboard accessible, you can access USB, SecureDigital, and CompactFlash slots. On boot, the app code checks these drives for firmware upgrades and applies them. (And there’s a reboot switch on the motherboard, too!)
From a remote perspective, ATMs support remote monitoring and configuration to allow changing splash screens, cash denominations, etc., or even do remote firmware upgrades. There are multiple levels of authentication, but Barnaby Jack found a vulnerability in this authentication process allowing for a remote authentication bypass. (He did not disclose his authentication bypass, but said he found it by fuzzing, so this work will probably be duplicated by others.)
He demonstrated two tools — one was Dillinger, a remote ATM attack and administration tool which exploits the remote authentication bypass. It’s reliable on dial-up or TCP/IP, and exchange scanning with a VoIP wardialer like WarVox is possible. Dillinger allows management of unlimited ATMs, can test remote bypass, retrieve location & master passwords, upload rootkits, and even retrieve the track data from all the cards that have been inserted into the machine.
Scrooge, an ATM rootkit, runs hidden in the background on the device, activated by a special key sequence or a custom card. It runs on any ARM/Xscale ATM, or Intel ones with some tweaks, but must be customized for different ATM models. It has a keyboard filter that hooks the ATM keypad & side buttons — SetWindowsHook() is undocumented on CE but still works. A special key sequence (or a card whose track data spells out “GIMMEDALOOT”) launches a menu. Scrooge captures track data and PIN-pad input, and can issue remote commands.
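The activation logic is easy to picture. Below is a minimal sketch in Python (purely illustrative; Scrooge itself is native Windows CE code, and the keypad sequence here is invented, while the “GIMMEDALOOT” magic string is from the talk):

```python
# Illustrative model of the two activation paths: a secret keypad
# sequence watched by a keyboard filter, or a card whose track data
# spells out a magic string. Names and the key sequence are invented.
MAGIC_TRACK = "GIMMEDALOOT"   # from the presentation
SECRET_KEYS = "1593572468"    # hypothetical keypad sequence

class ActivationFilter:
    """Naive sequential matcher, standing in for the keyboard hook."""
    def __init__(self, sequence: str):
        self.sequence = sequence
        self.matched = 0

    def on_key(self, key: str) -> bool:
        # Advance on a match, restart (or reset) otherwise.
        if key == self.sequence[self.matched]:
            self.matched += 1
        elif key == self.sequence[0]:
            self.matched = 1
        else:
            self.matched = 0
        if self.matched == len(self.sequence):
            self.matched = 0
            return True   # launch the hidden menu
        return False

def card_activates(track_data: str) -> bool:
    """A swiped card triggers the menu if its track data holds the magic."""
    return MAGIC_TRACK in track_data
```

A normal customer card does nothing; only the attacker's card or key sequence pops the menu.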
This is better seen than described. Here’s some video of remote ATM hacking with Dillinger:
And here we have the aftermath of a physical attack, where he opened the ATM with a key, stuck in a USB drive, and hit the reset button on the motherboard:
The “777 Jackpot!” on the screen and the peppy music are a nice touch.
As for how to prevent these sorts of vulnerabilities in the future, he recommends that ATM vendors offer upgrade options on the physical locks (say, at least making the key unique), implement binary signing at the kernel level to prevent unauthorized firmware upgrades, and disable remote management on the device.
For the final presentation of the day, I attended Dan Kaminsky’s talk, which was not the talk described in the BlackHat documentation at all, but rather an entirely different talk on using DNSSEC to implement public key infrastructure, because the DNSSEC root was finally signed (after only 18 years…) three weeks ago.
Dan seeks to use DNSSEC to solve a variety of problems, by creating what he calls a Domain Key Infrastructure:
- For users: when you receive an email, you can actually know for certain who it came from.
- For infrastructure buyers: we need strong authentication as much today as we did when trying (and failing) to create PKI in the past, and with DNSSEC we can actually create a working PKI. 60% of security breaches are credential-related.
- For infrastructure builders: DKI will make security products scale, and allow devices to validate the identity of peers. You can build scalable federated systems.
- For hackers and penetration testers: Dan’s new company will be actively supporting an aggressive public audit of all DNSSEC and DKI technologies.
Dan’s definitely right about one thing — we aren’t going to get security via moralizing about user education or waiting for regulation. We will have to deliver a better product, as judged by the people who have to run it.
DNSSEC is simple — it works just like DNS, but referrals and authoritative records are signed. Thus, when referred elsewhere, you’re told not only where the server to ask is, but also how to recognize it. Keys can lead to other keys.
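The “keys can lead to other keys” idea can be sketched with toy signatures. In this Python sketch an HMAC stands in for a real signature, purely to show the shape of the chain; real DNSSEC uses DNSKEY, DS, and RRSIG records and public-key cryptography:

```python
# Toy model of the DNSSEC chain of trust: each zone's key "signs" the
# key of the zone it delegates to, so trust flows from the root down.
import hmac
import hashlib

def sign(parent_key: bytes, child_key: bytes) -> bytes:
    """Stand-in 'signature' over a child's key by its parent."""
    return hmac.new(parent_key, child_key, hashlib.sha256).digest()

def validate_chain(trusted_root_key: bytes, links) -> bool:
    """links: list of (child_key, signature_by_parent), root to leaf."""
    current = trusted_root_key
    for child_key, sig in links:
        if not hmac.compare_digest(sign(current, child_key), sig):
            return False      # broken link: referral not vouched for by parent
        current = child_key   # the child's key now vouches for its children
    return True

root, com, example = b"root-key", b"com-key", b"example.com-key"
chain = [(com, sign(root, com)), (example, sign(com, example))]
```

An intact chain validates; tamper with any link and validation fails from that point down.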
DNSSEC was complex to deploy because it was designed to allow “key in a vault” security, where keys are offline and not generated on demand. When it was proposed eighteen years ago, CPUs were slow, and some installations are incredibly large (e.g. .com). Offline keying is cumbersome. However, there’s an alternative that’s relatively simple to deploy.
Phreebird is a DNSSEC server that’s simple because it uses online key signing, just like SSL, SSH, and IPsec. There is some risk here, of course, but we seem to accept it everywhere else, as everyone keeps keys online for some protocols. Those who are really concerned about security can use a hardware security module. Phreebird works as a proxy, and has effectively nothing to configure — you change the port of the DNS server, run Phreebird, and then supply the signature to your DNS registrar. It’s presently implemented as a UDP port forwarder, but they’re rebuilding it as a Linux mangle table. It’s very fast; according to Dan, it’s an order of magnitude faster than the DNS servers it’s proxying, so there should be almost no load. For performance, it caches signed responses, but always passes queries to the real nameserver so that all scenarios work — if it gets the same answer back, it serves the cached signed response instead of re-signing. Phreebird is open source and will be out in the next few weeks.
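As I understood it, the cache-then-sign flow looks something like this. The names and the signer are invented; this is a sketch of the logic described in the talk, not Phreebird’s actual code:

```python
# Sketch of an online-signing DNS proxy: every query goes to the real
# nameserver, but the (expensive) signature is only recomputed when the
# answer actually changes.
class SigningProxy:
    def __init__(self, backend, signer):
        self.backend = backend   # function: query -> raw answer
        self.signer = signer     # function: raw answer -> signature
        self.cache = {}          # query -> (answer, signature)

    def resolve(self, query):
        answer = self.backend(query)     # always ask the real server
        cached = self.cache.get(query)
        if cached and cached[0] == answer:
            return cached                # same answer: reuse cached signature
        signed = (answer, self.signer(answer))
        self.cache[query] = signed
        return signed
```

Repeated queries for an unchanged record cost one backend lookup and zero signing operations, which is where the performance comes from.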
Distributed authentication is only interesting if it’s end-to-end. The current methods of DNSSEC lookups, chasing & tracing, are blocked by various types of servers, which makes operational implementation difficult. Phreebird also supports wrapping DNS (and DNSSEC) in HTTP, using a custom DNS server that exposes an HTTP endpoint and takes base64-encoded DNS requests. They claim there is no performance hit.
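The wrapping itself is straightforward. A sketch of the base64 framing (the idea of base64-encoding raw DNS packets for an HTTP endpoint is from the talk; the function names are mine):

```python
# A raw DNS query packet is base64-encoded so it can ride inside an HTTP
# request to a custom endpoint, which decodes it and hands it to a real
# DNS server. URL-safe base64 lets the packet sit in a path or query string.
import base64

def wrap_query(raw_dns_packet: bytes) -> str:
    return base64.urlsafe_b64encode(raw_dns_packet).decode("ascii")

def unwrap_query(encoded: str) -> bytes:
    return base64.urlsafe_b64decode(encoded.encode("ascii"))
```

The encoding is lossless, so the HTTP endpoint recovers the exact packet the client built.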
Likewise, while X.509 is flawed (since a certificate just has to chain to one of a few hundred root CAs by way of thousands of untrustworthy intermediaries, and there is no exclusion or delegation,) it can still be used to wrap DNSSEC — high performance, easy tunneling via DNS over X.509 over SSL. When one of these certificates is received, you just need to extract all the keys from the trust chain and validate it all.
From here, Dan got into the more interesting stuff — what he calls DKI (Domain Key Infrastructure.) What if you could use DNSSEC to create a working PKI system? Since DNSSEC lets you strongly authenticate a domain, you can then ask that domain to authenticate users, and trust the response since you have a key for the domain. To demonstrate this, he presented PhreeShell: federated identity for OpenSSH. With this modification, .ssh/authorized_keys2 contains identities (e.g. email@example.com) rather than keys — it makes delegating access trivially easy.
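A sketch of what such an identity lookup might look like at login time. The record-naming scheme and the stub resolver here are my guesses for illustration, not PhreeShell’s actual format; the point is that authorized_keys2 holds identities and the keys are fetched (over DNSSEC-validated lookups) when needed:

```python
# Resolve an identity like "alice@example.com" to a public key via a
# lookup under the (signed) domain, then compare it to the key the
# client offered. The "_keys" label is a hypothetical naming scheme.
def key_for_identity(identity: str, resolver):
    user, _, domain = identity.partition("@")
    return resolver(f"{user}._keys.{domain}")

def authorized(identity: str, offered_key: str, resolver) -> bool:
    return key_for_identity(identity, resolver) == offered_key
```

Delegation then reduces to editing a list of email-style identities instead of shuttling key files around.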
Trusting DNSSEC eliminates the scaling issues of federated PKI. Really, you’re not trusting DNSSEC so much as ICANN, but it seems a fairly good choice for a single root keyholder in that it has external political constraints and a delegation system designed to prevent operational dependency.
So how do we implement DKI everywhere? Eventually, by adding the functionality to everything — link in LDNS or libunbound. On Linux, you can make most things work by patching X509_verify_cert in OpenSSL, because practically everything calls out to it for crypto, but there’s nothing so simple in the browser world, where IE uses CryptoAPI, Firefox and Chrome use NSS, and most apps are cross-platform. For this, Dan has an app called Phoxie, which is a remote validation proxy for production browsers that allows certificate verification against DNSSEC in current browsers. It’s also possible to make self-certifying URLs, but they look horrible and become unusable if the certificate ever expires or needs to be rotated, so they’re not a good solution.
Finally, we may get secure email out of this. If we can verify what server sent an email (which with DNSSEC we can), we can also in many cases be sure who sent it (since a “respectable” domain won’t let its users send mail as one another.) Right now the user experience around secure email is minimal and our faith in it is low — if most email could be verified, we could easily get to a world where email clients only stated mail was “From” someone if that fact had been cryptographically verified, and otherwise used some suspicion-inducing verbiage (e.g. the X-Supposedly-From header.)
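That policy is simple to express. A sketch, with the verification check itself stubbed out (only the X-Supposedly-From header name comes from the talk; the rest is illustrative):

```python
# Render a confident "From" header only when the sending domain was
# cryptographically verified; otherwise fall back to suspicion-inducing
# verbiage so unverified mail never looks authoritative.
def display_headers(sender: str, domain_verified: bool) -> dict:
    if domain_verified:
        return {"From": sender}
    return {"X-Supposedly-From": sender}
```

A mail client applying this rule would make forged "From" lines visibly second-class rather than indistinguishable from real ones.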
Overall, Dan’s talk was interesting, but I find my enthusiasm is rather limited by lack of faith any of this stuff will be used. DNSSEC has been around for 18 years and no one uses it yet; having the root signed is a wonderful step and I hope it leads to the revolution in PKI Dan’s touting, but I also feel like I’ll believe it when I see it.
After all the talks, I dropped in on parties thrown by Mandiant, IOActive, and NetWitness, but unfortunately had to skip Tenable and Rapid7. There are so many parties, receptions, and events that it’s impossible to visit all or even most of them.