BlackHat 2009, Day 2

The Thursday keynote was given by Bob Lentz, a Deputy Assistant Secretary of Defense for the United States. His main point was the paradigm shift from network-centric security to what he called content-centric security, and the fact that this devalues the protections around network perimeters. Static defenses don’t work when all the services being used are distributed and not found behind your firewall; the adversary is effectively always inside your firewall. Other notable but less positive things from the speech included that the Department of Defense considers “reducing anonymity” a strategic goal, and that the government still likes to prefix “cyber-” on everything, creating “cyberczar,” “cybertime,” “cyber green movement,” and even “cyber” as a standalone noun.

This year, BlackHat had an entire Cloud Computing track, running all day on Thursday, of which I attended a great deal. Part of my job involves protecting cloud computing services, so it seemed very relevant, and it’s certainly a hot topic in the industry right now. It began with Alex Stamos, Nathan Wilcox, and Andrew Becherer presenting a lecture on cloud computing models and vulnerabilities.

They defined cloud computing as not just virtualization, but including general-purpose hosts, central management, application mobility, distributed data, low-touch provisioning, and soft failover. They looked at three different cloud models: Software as a Service, Platform as a Service, and Infrastructure as a Service, and the differences & vulnerabilities in each.

The Software as a Service (SaaS) model is to outsource everything. From a security perspective it’s not necessarily a bad idea — the cloud provider probably has a lot more security people than the average company. On the other hand, you also outsource all your data — the recent Twitter “breach” via somebody logging into Twitter’s Google Docs account shows the risks this can entail. You lose the perimeter, endpoint management, the ability to use better authentication than simple passwords, credential quality controls, password reset processes, and realtime anomaly detection (though you hope the cloud provider has some of these things.) It puts all your eggs in one basket — if someone can read your email, they can access all your data. SaaS products include Office Live, Google Apps, and Salesforce.com. None of these have decent audit & rollback capability; Google Apps at least provides login history (though you have to write code & call an API to get at it) but still no read/write level auditing. Salesforce.com offers some write logging. However, the biggest flaw with SaaS models may well be authentication — all your security relies on a password, with all the vulnerability that entails, and you can’t even set a strong password policy (for all the good it would do you.) Google Apps actually lets you use a SAML-based SSO system; with other SaaS apps the best you can do is set a strong password policy via employee education.

Another issue with SaaS providers is the legal concerns — the cloud service EULAs tend to promise basically nothing and disclaim all liability. Also, they forbid malicious traffic — even pentesting your own app. There’s also decreased protection from search and subpoena. Since the data is stored with someone else, there’s no Constitutional protection from search, and even statutory protection is usually only for “communication.” Are Google Docs communication? Courts haven’t really defined this yet. The net result of this is that there’s no need for a warrant, probable cause, or even notice of a search — you can’t fight a seizure before it happens, but only after the fact.

Platform as a Service (PaaS) is the model of having a common development platform provided, yet allowing people to customize their applications. This is the model of Google AppEngine, Force.com, and (maybe) Windows Azure. (Azure is a unique case, kind of halfway between PaaS and IaaS; I’ll come back to this.) This section of the presentation was rather odd, as they really looked at the common web vulnerabilities (CSRF, XSS, SQL injection) and investigated how the platform protected you from them. In short, the answer is that they don’t. Some of the platforms have some inherent protection available (e.g. Windows Azure apps are typically ASP.NET, which has some built-in XSRF protection via ViewStateUserKey, XSS protection via encoders, and SQL injection protection via LINQ), but it’s up to the developer to actually use them. I found this section somewhat lacking, because it wasn’t really about the cloud platforms at all, but rather the common web technologies sitting on them.
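To illustrate the “the primitives exist, but you have to use them” point, here’s a minimal, framework-agnostic anti-CSRF token sketch in Python — my own illustration of the general technique, not code from any of these platforms:

```python
import hashlib
import hmac
import secrets

def make_csrf_token(session_id: str, secret: bytes) -> str:
    # Derive a per-session token; the server embeds this in every form.
    # (Analogous in spirit to ASP.NET's ViewStateUserKey binding.)
    return hmac.new(secret, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf_token(session_id: str, secret: bytes, submitted: str) -> bool:
    # Reject a state-changing request unless the form carried the right token.
    expected = make_csrf_token(session_id, secret)
    return hmac.compare_digest(expected, submitted)

secret = secrets.token_bytes(32)
token = make_csrf_token("session-abc", secret)
legit = check_csrf_token("session-abc", secret, token)    # genuine form post
forged = check_csrf_token("session-abc", secret, "evil")  # cross-site forgery
```

The point stands regardless of platform: the primitive is trivial, but a cross-site request succeeds by default unless the developer wires a check like this into every state-changing handler.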

The Infrastructure as a Service (IaaS) model is that taken by Amazon EC2 and similar services. It provides virtual machines with short-lived instances, non-persistent local storage, and available helper services. Though the presenters thought of Azure as very much a PaaS model, I think it’s a little fuzzier here — while Azure does not allow you to choose an operating system (the Windows Azure OS runs on every VM), it does not constrain you to anywhere near the degree of Google AppEngine or Force.com, as you can run arbitrary native code on it. It would be impossible to use AppEngine or Force.com to run anything but a web site; Azure is like EC2 in that it could be used for any flexible computing task, not just web sites.

The problems with IaaS services are usually hypervisor flaws or problems in the helper services. However, they brought up something very new here that I don’t think any of the current cloud providers consider — lack of entropy. Virtual hardware has mostly deterministic timings — input events don’t exist and block device events are abstracted. Thus, entropy is generated very slowly if at all. What’s more, in the case of Amazon EC2, since OS images are available to everyone, an attacker can get a copy of the stored entropy pool you’re using (which will never update after the image is originally created, thus depriving the system of another source of entropy) and eliminate it as well. The net result of this is that pseudo-random number generators — even cryptographically strong ones — are unreliable and may be predictable. This attack may or may not be practical given the specifics of the system in question, but for now you may not want to build your online casino or public key infrastructure in an IaaS environment! Cloud providers may actually have to have random number generation as a helper service as well, supported by quantum hardware.
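To make the entropy problem concrete, here’s a toy Python sketch (mine, not the presenters’) of two “VMs” booted from the same image: with no hardware events to stir the pool, both seed their generator from the same frozen bytes and produce the same “random” key.

```python
import random

# In this toy model, the only seed material available at boot is the
# entropy pool that was frozen into the OS image at creation time --
# and the attacker has a copy of the same public image.
STORED_ENTROPY_POOL = b"bytes frozen into the image when it was created"

def boot_vm_and_generate_key(pool: bytes) -> int:
    # Deterministic virtual hardware means no fresh entropy arrives,
    # so the PRNG state is fully determined by the stale pool.
    rng = random.Random(pool)
    return rng.getrandbits(128)

victim_key = boot_vm_and_generate_key(STORED_ENTROPY_POOL)
attacker_key = boot_vm_and_generate_key(STORED_ENTROPY_POOL)
```

Real systems mix in more inputs than this, so treat it as a sketch of the failure mode: if every seed input is reproducible, even a cryptographically strong generator yields reproducible output.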

Next, Jeremiah Grossman and Trey Ford presented a sequel to last year’s talk on “making money the black hat way.” Essentially, it was a survey of interesting hacks-for-profit that have been carried out recently. They noted that hacking activity is up this year (layoffs create more hackers?) and that 69% of attacks are discovered only because a 3rd party tells the company it’s been hacked.

Some of the interesting ones: eBay gave away 1000 items for $1 in a “Holiday Doorbusters” promotion. However, almost 100% of them were bought by bots, which was evident because the items were purchased before the item description page was even viewed. StrongWebmail.com had a contest to give $10,000 to whoever could hack into the CEO’s webmail account; rather than attacking the servers, the winners of the contest sent the CEO phishing mail with an XSRF in it that stole the contents of the account. (Amusingly, they got him to open the mail by labeling it “I think I won.”) Grossman & Ford also brought up cookie-stuffing, a type of affiliate fraud that’s been around for many years; it’s a well-known technique in the affiliate marketing world (basically you spoof the referrer while iframing the advertiser’s site on your site, then drive traffic to your site in ways that would not please the advertiser if they knew about it) but was apparently new to most of the BlackHat audience. They also brought up the technique of using embedded site search to fake authority links, another well-known “black hat” SEO technique. Marketers have apparently also begun spamming Google Maps with fake businesses, so as to come up first in “local searches” with their web-based and not-remotely-local businesses. A man in Britain used Google Earth to find all the lead roofs in London, then stole the lead tiles in the middle of the night.
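The tell in the eBay case — purchases timestamped before the item page was ever viewed — is simple enough to sketch. This is my own hypothetical check, not anything from their actual fraud systems:

```python
from datetime import datetime

def looks_like_bot(page_view_time: datetime, purchase_time: datetime) -> bool:
    # A human has to render the item page before buying; a bot hitting
    # the purchase endpoint directly can "buy" before any page view.
    return purchase_time < page_view_time

view = datetime(2009, 7, 30, 12, 0, 5)
buy = datetime(2009, 7, 30, 12, 0, 1)  # purchase four seconds before the view
bot_flag = looks_like_bot(view, buy)
human_flag = looks_like_bot(buy, view)
```

Real detection would correlate server logs across many events, but the principle is exactly this: impossible orderings in the clickstream give the bots away.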

Some of the more ambitious hacks were more intriguing, though. One man discovered that you could order “advance replacements” for broken iPods from Apple just by giving them a credit card number as collateral; he used low-balance anonymous Visa gift cards to get 9,000 iPods. Another group put their garage band music in the Amazon and iTunes stores using Tunecore, then bought hundreds of downloads of their own album with stolen credit cards (thus getting a big check from Tunecore.) One thing to note is that these people got caught only because they weren’t trying not to. The iPod guy shipped all 9,000 to his home address; the Tunecore fraud was so blatant as to get this garage band’s album onto Amazon and iTunes top-10 bestsellers.

Finally, in South America, the system for getting logging permits for the Amazon rain forest was put online. An investigation discovered that 107 different logging companies had hired hackers to compromise the site, which was full of common web vulnerabilities. All told, 1.7 million cubic feet of lumber were smuggled out of the country. Scarier still, several permit systems in the United States are now protected only by a web site: entrance visas, hazardous material transport, and open burning permits.

Next, Haroon Meer, Nick Arvanitis, and Marco Slaviero presented a talk on “Clobbering the Cloud.” This SensePost talk covered much of the same material as the iSec Partners talk earlier in the day. Their primary risk factors for cloud computing were as follows: lack of transparency from cloud providers (opaque EULAs), people don’t want to store regulated data in the cloud, vendor lock-in especially if the vendor goes out of business or stops offering the service, availability concerns (not just servers being down, but also things like password lockout from DoS attacks), monoculture issues (worms and cascading compromise are a big concern when you have thousands of perfectly-identical boxes), and trust in the cloud provider — you have to trust your cloud provider implicitly not to lose your data or have system failures. In addition, there’s the problem that the cloud is available to the bad guys, too — cloud boxes can be used for click fraud, DoS, or spamming (for a short time Amazon EC2 was the net’s #1 spammer.) Finally, the security of your environment is all in the hands of the account owner, who authenticates with nothing more than a password, and is (in most companies) probably a non-technical executive. Breaking into the CIO’s email now makes you the global administrator of the company’s entire infrastructure.

The presenters then went into more detail about attacks on Amazon Web Services (EC2, S3, SQS, and DevPay) in particular. I can understand why they chose AWS; due to its flexibility, it’s certainly the most fun of the cloud services for a hacker to play with (though Windows Azure is getting there, too.) EC2 is based on a modified Xen hypervisor, and supports running any OS you want that can run in that environment. Amazon provides 47 OS images, but users have contributed over 72,000 more, and an EC2 user can choose to boot any of them. Sometimes user images have interesting things in them, like other users’ EC2 credentials, for example.

Scanning EC2 is prohibited, but you can start up one of the images and scan it yourself via an SSH tunnel (or even have the machine scan itself.) They found 646 Nessus critical vulns in Amazon’s public images; you can also steal Amazon’s own Windows activation keys off their images. The DevPay system is interesting; it’s supposed to allow a user to make an image then charge other users for its use (e.g. to resell an application on EC2.) However, the presenters found you could get a DevPay image and modify its ancestor info (stored in the image itself) so as to credit use of it to you rather than the original author, then reregister it for others to use.

Simply putting up pre-owned (pun intended) images for others’ use can be an attack on AWS. If you prop up a box with a good name (e.g. “Ubuntu 9.04 Standard Image, All Patches”) and a low-numbered ID (so it shows up at the top of the list), people will use your image to host their apps! You can get a low-numbered ID simply by registering repeatedly; since it’s a hash, eventually you’ll get lucky and have one start with zero. You can only have 20 images per account, but you can create 20 accounts in 3 minutes, so there’s no effective limit.
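The low-ID trick is just repeated sampling against a hash: each registration is a fresh draw, and a hex ID starts with “0” about one time in sixteen. A quick Python sketch (the ID-derivation scheme here is entirely hypothetical, just to show the odds) makes it clear why a handful of throwaway accounts is plenty:

```python
import hashlib
import itertools

def register_image(attempt: int) -> str:
    # Hypothetical: suppose the service derives the public image ID from a
    # hash over registration data, so each re-registration gives a new ID.
    return hashlib.sha256(f"my-image-{attempt}".encode()).hexdigest()[:8]

# Keep re-registering until we draw an ID starting with "0", which sorts
# to the top of an alphabetical image list. Expected ~16 attempts.
for attempt in itertools.count():
    image_id = register_image(attempt)
    if image_id.startswith("0"):
        break
```

With a 1-in-16 chance per draw, 20 images per account times 20 accounts gives you hundreds of draws — getting a top-of-the-list ID is essentially guaranteed.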

After that talk, I went over to the mobile track to hear Jesse Burns talk about Android. Android interests me because I’d really like a phone that behaves like a computer (i.e. a device I own) rather than like a toy the phone company is reluctantly allowing me to touch, and Android’s open-source nature has real potential to give me that. It’s not that I trust Google any more than any other wireless provider, just that the platform seems much more hackable and thus inherently harder to control.

Android has a dual security model — Android permissions on various privileges, plus Linux permissions on the filesystem. Applications have their own UIDs/GIDs and are thus somewhat isolated from each other. A package (application) is made up of Activities (GUIs,) Services (background tasks,) Broadcast Receivers (event handlers,) Content Providers (databases,) and Instrumentations (used for testing.) For interprocess communication, there are Intents, which are sets of name-value pairs with routing information. Applications are written in Java, but they’re not applets (i.e. no Java sandbox.)

Available attack surfaces for a malicious app include other apps, system services under privileged accounts (like the clipboard or the surfaceflinger, which draws the UI and owns the screen,) the binder (the inter-process communication system, similar to domain sockets,) and anonymous shared memory. There are a variety of tools available — one can just install a bash shell on Android (either interactively or over the wire or network,) use logcat to look at logs, view Android system properties, check the /proc and /sys filesystems, run dmesg to get kernel output, and all the usual Linux attacks. There’s also a file in /data/system/packages.xml that contains data about every installed app, including the location of the app and its manifest. /proc/binder contains a transaction log of the inter-process communication, and /proc/binder/proc contains data of all the processes themselves.
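Once you have a shell, mining a file like packages.xml is trivial. Here’s a Python sketch against a toy snippet — the element and attribute names below are illustrative, modeled on the idea rather than dumped from a real device:

```python
import xml.etree.ElementTree as ET

# A toy fragment shaped like /data/system/packages.xml; real files carry
# more attributes (permissions, signatures, etc.) than shown here.
SAMPLE = """
<packages>
  <package name="com.example.notes" codePath="/data/app/com.example.notes.apk" userId="10042"/>
  <package name="com.example.mail" codePath="/data/app/com.example.mail.apk" userId="10043"/>
</packages>
"""

root = ET.fromstring(SAMPLE)
# Map each installed package to where its code lives on the filesystem.
installed = {p.get("name"): p.get("codePath") for p in root.findall("package")}
```

A few lines like this enumerate every app on the device along with its install location and UID — exactly the kind of reconnaissance the talk described.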

Another interesting detail about Android is the “secret code” handler. When you dial *#*#somenumber#*#*, this triggers the secret code handler for that number, which can do pretty much whatever an app wants it to do. The only secret codes on “stock” Android are 8351 and 8350, which turn voice dialer logging on and off, respectively. However, wireless providers may add additional codes — the presenter found some in T-Mobile’s MyFaves app, for example. Finally, the presenter had a series of Android hacking apps he’d developed — Manifest Explorer (to view the system manifest and the manifest of each app, such as to see what events they react to,) Package Play (to see the parts of a package or to directly activate Activities,) Intent Sniffer (to view Intents as they’re routed at runtime,) and Ill Intent (an Intent fuzzer.)

The last presentation of the day was Bruce Schneier, whose talk was entitled Reconceptualizing Security. Mostly, he gave the same speech he always does, about fear, psychology, security vs. security theater, why we mis-estimate risk, etc.; pick up a copy of Beyond Fear or Secrets and Lies if you want the details. However, during Q&A he did also talk about the attack on AES-256 that was just demonstrated. It’s a feasible attack on 10 rounds of AES-256 (out of 14,) in 2^42 time. It’s a related-key attack that works only on 256-bit keys (not on shorter ones,) so there’s no reason to panic right now, but it does show that the margin of safety on AES is smaller than we thought. There may need to be a Double-AES in the same way Triple-DES was devised as a stopgap until a new cryptosystem is developed. Alternately, the standard could be changed to increase the number of rounds, but that would require replacing or updating all the AES-based crypto hardware out there.
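For a sense of scale on why 2^42 counts as “feasible”: assuming a rough billion operations per second on a single machine (my ballpark figure, not one from the talk), the arithmetic works out to an afternoon’s effort.

```python
# Back-of-the-envelope feasibility of a 2^42-work attack.
ops = 2 ** 42           # total operations required
rate = 10 ** 9          # assumed: one billion operations/second on one machine
seconds = ops / rate
hours = seconds / 3600  # roughly 73 minutes
```

Compare that with the 2^256 work of brute-forcing the full key, and it’s clear why an attack at 2^42 — even against a reduced-round variant — erodes confidence in the safety margin.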

And that wrapped up BlackHat 2009. Overall, there was nothing as Earth-shattering as last year’s DNS exploit, though it turns out that the SSL issues are pretty nasty. After BlackHat, I hit the Microsoft Security Researcher Appreciation Party at Christian Audigier, which was actually a pretty good party this year without any of the problems of previous years. Its only drawback was that it only ran two hours. However, at this point DefCon festivities had begun, so there was still plenty going on; my next post will get into DefCon 17.

anonymity, attacks, crypto, hardware, industry, legal, networks, passwords, risk, society
