CA’s Security Advisor Research Blog has an interesting post about a bit of malware they discovered while doing research for their Anti-Spyware product — the My SHC Community system. You’re offered a chance to join when you buy something from sears.com or kmart.com. The system offers you “special offers and promotions,” the usual marketing pitch — give up some privacy in exchange for discounts.
Bruce Schneier points out that if an average piece of spyware did this, it would be considered criminal. But Sears is not only a large corporation, and thus able to get away with this sort of thing (remember the Sony rootkit debacle?), it also had a fairly clear privacy statement that the user agrees to before installing, so it may be on solid legal ground. Even if it’s legal, though, it’s a terrible idea for all involved.
When you have private data that you have a moral, legal, or regulatory responsibility to protect, the first thing to consider, before looking at security measures, is whether you need the data at all. It’s a lot easier to delete it and stop collecting it than it is to put in encryption systems, network access controls, auditing and logging systems, etc. A lot of companies collect reams of useless private data simply because “they’ve always done it that way,” and thus have to spend money protecting things of no value to them. This is probably the logic behind Sears’s data collection here — “we might as well have everything, it could be useful someday” without thinking about the cost that having that data imposes on the enterprise. You can’t have a catastrophic data breach if you don’t have the data.
This is also a symptom of a larger problem — people are increasingly unable to control the code running on their own computers. The separation of code and data grows ever more porous with the web’s “active content,” and DRM software exists specifically to keep users from controlling their own systems’ activity. Microsoft’s Vista User Account Control and Integrity Levels try to mitigate this, but they’re really not enough.
The problem is that they rely on the user to decide what code is allowed to run, but the user has no way to verify what that code will do until they run it. The computer cannot tell the user what the code will do, because native code is unverifiable. With some technologies, such as Microsoft .NET managed code, the system can make meaningful statements about what a program can do — but people writing malicious or underhanded apps like this Sears spyware and the Sony rootkit won’t use those technologies; they’ll stick to unverifiable native code. My hope is that virtualization will offer a way out in the long term — a way for each application to have its own enforceable security boundary. To keep these same problems from recurring, though, application developers will have to give up functionality: certain kinds of inter-application interaction will have to be categorically prohibited, which will sometimes inconvenience the user.
I think we’re more likely to see these solutions come from the open-source world than from the commercial operating system world (i.e., Microsoft and Apple). The commercial OS world cares deeply about (a) ease of use and (b) backwards compatibility for applications, because those things sell software. The open-source world is less concerned with them, which inhibits its adoption in the marketplace but also produces software that is often much more under the user’s control than commercial software is. The real trick will not be developing these security technologies (not that that will be easy); it will be adapting them so that non-technical users can live with them every day.