Sears & KMart’s Official Malware

CA’s Security Advisor Research Blog has an interesting post about a bit of malware they discovered while doing research for their anti-spyware product: the My SHC Community system. You’re offered a chance to join when you buy something from Sears.com or Kmart.com. The system offers you “special offers and promotions,” the usual marketing arrangement: give up some privacy in exchange for discounts.

However, this system does rather more tracking than your average grocery store “membership card.” When you join, it installs a local proxy on your system and reroutes all your web traffic through it, including SSL sessions on port 443 (yes, it actually mounts a local man-in-the-middle attack on your online banking). It then monitors this traffic and, based on some algorithm that has not been disclosed, sends some of it to comScore.

Sears’s privacy policy promises not to share your data with anyone, and so does comScore’s, but it’s hard to figure out what that means in this case. After all, comScore’s policy also promises not to collect any personally identifiable information, yet your My SHC Community data is tied to a personal ID at Sears, so in this case they are clearly collecting personally identifiable information. Most people would also consider copies of their online transactions inside SSL sessions to be “personally identifiable.” We can’t be sure comScore receives all of these, since the algorithm by which some traffic is rerouted is unknown, but we do know the software is capable of sending them, so we just have to take comScore’s word for it. And CA’s research did show an SSL transaction being rerouted, credit card numbers and all.
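To make the mechanism concrete, here is a minimal sketch of the two decisions a local monitoring proxy like this has to make: recognizing the browser’s HTTPS CONNECT requests (which is how it ends up in the middle of “encrypted” sessions), and selecting which intercepted traffic to report. Everything here is illustrative; the function names are mine, and the real selection algorithm was never disclosed, so the filtering rule below is purely a stand-in.

```python
# Illustrative sketch only: how a local proxy of the kind described above
# gets to see every request. A real deployment would listen on 127.0.0.1,
# rewrite the system proxy settings, and terminate TLS itself (the local
# man-in-the-middle). Names and the reporting rule are hypothetical.

def parse_connect_target(request_line: str) -> tuple[str, int]:
    """Parse the first line of an HTTPS CONNECT request, e.g.
    'CONNECT bank.example.com:443 HTTP/1.1'. A local proxy that answers
    these requests sits between the browser and the bank; if it also
    terminates TLS itself, it can read the plaintext of the session."""
    method, target, _version = request_line.split()
    if method != "CONNECT":
        raise ValueError("not a CONNECT request")
    host, _, port = target.partition(":")
    return host, int(port or "443")

def should_report(host: str) -> bool:
    """Stand-in for the undisclosed selection algorithm: decide whether an
    intercepted session gets forwarded to the collection server."""
    # Hypothetical rule: report everything except the vendor's own sites.
    return not host.endswith((".sears.com", ".kmart.com"))
```

The point of the sketch is how little machinery is involved: once the proxy answers CONNECT requests and holds its own certificates, every site you visit, banking included, is readable and reportable.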

Bruce Schneier points out that if an average piece of spyware did this, it would be considered criminal. Sears, however, is not only a large corporation and thus able to get away with this sort of thing (remember the Sony rootkit debacle?); it also presents a fairly clear privacy statement that the user agrees to before installing, so it may be on good legal ground. But even if it’s legal, it’s a terrible idea for all involved.

First of all, the app is silent: once it’s been installed, it gives no indication that it is monitoring your traffic, and no clear way to remove it. Second, the app comes from Sears and presents Sears’s privacy policy, but the data goes to comScore, while both parties claim the data is not shared with “any other party”; taken together, the privacy policies border on nonsensical. If it takes a lawyer to figure out what your click-through license agreement actually means, it’s pretty disingenuous to claim that end users have been properly informed and have voluntarily waived their privacy rights. And third, comScore and Sears are collecting data they don’t even want (such as your credit card numbers and favorite non-commercial websites) along with the information they’re actually trying to collect. This saddles them with a legal burden to protect and secure huge volumes of information that provide them no benefit.

When you hold private data that you have a moral, legal, or regulatory responsibility to protect, the first thing to consider, before looking at security measures, is whether you need the data at all. It’s a lot easier to delete it and stop collecting it than to put in encryption systems, network access controls, auditing and logging systems, and so on. A lot of companies collect reams of useless private data simply because “they’ve always done it that way,” and thus have to spend money protecting things of no value to them. That is probably the logic behind Sears’s data collection here: “we might as well have everything; it could be useful someday,” without a thought for the cost that holding the data imposes on the enterprise. You can’t have a catastrophic data breach if you don’t have the data.
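The “don’t collect what you don’t need” principle above can be sketched in a few lines. This is a hypothetical example, not anything Sears or comScore actually ran: the allowed-field schema and the card-number pattern are my own illustrative choices. The idea is simply that minimization happens at ingestion, before anything touches storage.

```python
import re

# Digit runs of 13-16 with optional space/hyphen separators: a rough,
# illustrative stand-in for "looks like a payment card number."
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

# Hypothetical schema: the only fields the analytics use-case needs.
ALLOWED_FIELDS = {"page", "visited_at"}

def minimize(record: dict) -> dict:
    """Drop every field outside the declared schema, then scrub card-like
    digit runs from whatever remains. Data you never wanted (passwords,
    card numbers) is never stored, so it never needs protecting."""
    return {
        key: CARD_RE.sub("[REDACTED]", value) if isinstance(value, str) else value
        for key, value in record.items()
        if key in ALLOWED_FIELDS
    }
```

Running every incoming record through a filter like this costs a few lines of code; protecting a warehouse of credit card numbers you never wanted costs rather more.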

This is also another symptom of a larger problem: people are increasingly unable to control the code running on their own computers. The separation of code and data is becoming increasingly porous with the web’s “active content,” and DRM software exists to keep the user from controlling their own system’s activity. Microsoft’s Vista User Account Control and Integrity Levels systems try to mitigate this, but they’re really not enough.

The problem is that these mechanisms rely on the user to determine what code is allowed to run, but the user is unable to verify what that code will do until he runs it. It’s impossible for the computer to tell the user what it will do, as native code is unverifiable. With some technologies, such as Microsoft .NET, it is possible for the system to tell the user what the code will do, but people writing malicious or underhanded apps like this Sears spyware and the Sony rootkit will not use those technologies; they will stick to unverifiable native code. It is my hope that virtualization will offer a way out of this in the long term: a way for each application to have its own enforceable security boundary. However, to keep these same problems from recurring, application developers will have to give up functionality; that is, certain types of inter-application interaction will have to be categorically prohibited, which will sometimes inconvenience the user.

I think we’re more likely to see these solutions come from the open-source world than the commercial operating system world (i.e. Microsoft and Apple.) The commercial OS world is very concerned about a.) ease of use for the user, and b.) backwards compatibility for applications, as these things sell software. The open-source world is less concerned with these things, which inhibits their adoption in the marketplace but also results in software that is often much more under the user’s control than commercial software is. The real trick will not be developing these security technologies (not that that will be easy); it will be adapting them so that they can be used every day by non-technical users.

legal, privacy, products
