Deterring the Internal Attacker

On January 21st, 2008, the major French bank Société Générale lost $7.09 billion attempting to unwind unauthorized trading positions taken by Jérôme Kerviel, a futures trader with the bank. Kerviel had taken positions worth $73.3 billion, far above not only his trading limits but the bank’s entire market capitalization. The loss taken by unwinding the positions during a declining stock market was the largest rogue trader loss in history, dwarfing the $1.4 billion loss by Nick Leeson that collapsed the venerable Barings Bank in 1995.

For all that we in the security industry picture threats coming at our companies from without, sometimes the greatest threats lie within. No external hacker has ever done the kind of damage that rogue insiders like Kerviel and Leeson are capable of, yet we focus on putting firewalls around our companies, rooting out worms and viruses, and securing our websites. While these are undoubtedly important, it is equally important to protect against internal adversaries — and often much more difficult.

The Problem of Trust

Companies must trust their employees — without the employees, there is no company. Accountants and traders are trusted with financial records, system administrators and information security personnel are trusted with access to critical files, physical security and cleaning personnel are trusted with physical access to the facilities, and managers are trusted with company secrets, strategy, and intentions.

IT employees and developers are specialists. As systems increase in complexity, those trusted with building and maintaining them are required to obtain knowledge further and further from most people’s understanding. Often, knowledge of how to build and maintain these systems also brings knowledge of how to subvert them. IT engineers and developers know how their systems break down — they know their weak points, where they’re being watched and monitored, and where no one is looking. This problem isn’t unique to information technology — an aircraft mechanic probably knows how to sabotage a plane without leaving a trace, and members of police and military bomb squads are experts on which explosives can escape detection and tracing. And as recent news has demonstrated, traders in brokerages and banks know how the internal controls of their corporations work, and where they break down. Internal attackers are thus the most dangerous of all — they are already equipped with the kind of domain knowledge that an external attacker might need to spend weeks or months gathering.

Although we cannot entirely abandon trust in a company’s employees, we should consider where this trust comes from and whether or not it is warranted. Many companies sharply divide the level of trust and privilege given to employees vs. that given to contractors and vendors within their IT and development departments. The theory is that employees are allied to the company for the long term, and compensated with long-term benefits like retirement plans and vacation time that they will be unwilling to risk for short-term gain, while vendors and contractors have less loyalty since they come and go as needed. However, in today’s IT world, is this really the case? I do not doubt that contractors feel little loyalty to the company, but it is increasingly doubtful that employees do, either. The average IT employee’s tenure at a corporation is now under 18 months — and thus they place little value on long-term benefits. Books like Corporate Confidential advise employees to view their employment relationship as, if not outright adversarial, at least mutually exploitative, to be dropped by either party as soon as it becomes in their interest to do so. Employees see that corporations no longer feel loyalty to them — the days of the job for life are over — and so loyalty to the corporation has gone as well.

Of course, lacking a strong sense of corporate loyalty does not lead most employees to embark on rogue-trading schemes, steal from their employers, commit electronic sabotage, etc. And even in the 1950s heyday of the organization man and the corporate family, some people took advantage of their employers and ran off with stolen fortunes. Some people are thieves and will steal given the opportunity no matter how well-treated they may be. Others are incorruptible, bound by their own moral code that would prevent them from stealing regardless of opportunity. The vast bulk of humanity, though, is somewhere in between.

These employees are not likely to become attackers, and trusting them is a necessary part of doing business. However, this trust need not be absolute — we can trust, but verify. While we may not be able to prevent every internal attack, we can deter them, and make them less likely to occur. Steps can be taken to help keep most people honest, reducing both the incentive and the opportunity for theft.

Building Employee Loyalty

The days of the job for life and absolute loyalty to the corporation are probably over for good, inasmuch as they ever existed at all. However, the fact remains that internal attacks, particularly those not motivated by theft but rather simple vandalism, are much more likely to be carried out by disgruntled and angry employees than by content ones.

IT employees and developers are sometimes a strange breed — the sort of person who chooses to spend their time with technology is often different from the sort of person who chooses to be a manager. So if it’s not a good retirement plan, an increase in vacation time after 5 years, and a promise of stability and long-term employment, what does build loyalty and goodwill with technical employees?

(Of course, any generalization about a type of person is going to be more accurate for some people than others, but I’ve found these to be useful rules of thumb for dealing with technology employees.)

These are important, but will not, of course, make every employee perfectly happy. There are some things that technical employees have no patience for at all:

When it comes to performance management, technical employees need to be told, directly and clearly, how they’re doing and what needs improvement (if anything). Not being people-oriented, they often can’t read you. They don’t know if you’re happy with them or not unless you tell them, and they’re certainly not going to ask. While they deal extremely well with technical ambiguity — they love to solve problems, so an incoherent mess from a technical perspective is just a challenge to overcome — they don’t deal well at all with ambiguity in other contexts. Clear expectations and consistent feedback make their job simply another problem to be solved, which makes it much more satisfying to them. Without this feedback, they are left to guess where they stand, and that uncertainty is precisely the kind of thing that turns a content employee into a disgruntled one.

For many managers, these may seem like obvious guidelines — but they’re often problems in companies, particularly in IT and development departments of nontechnical companies. These factors mean a lot to many technical employees — often a lot more than traditional compensation. The best prevention against malicious insiders is to keep the insiders from becoming malicious in the first place by ensuring that the company earns their trust and respect.

Reducing Opportunity for Attack

Unfortunately, no matter what your company does, some people aren’t going to love their jobs. In addition, presented with the opportunity to steal, people are going to be tempted — and the greater the opportunity, the greater the temptation. Thus, it is important to reduce the opportunity for theft.

The traditional information security controls are often useless against insiders. The firewall provides no protection at all against someone already inside. Anti-virus and anti-malware systems mean nothing to someone who doesn’t need to break into a PC on the network, because they already have legitimate access. Network access controls are impotent against the domain administrator, who has the authority to alter access control lists at will. Obfuscation and hiding secret data provide no defense against the developer tasked with performing the obfuscation and hiding.

Fundamentally, a system designed to provide security always involves an implied question — secure from what? The vault door in a bank secures against burglars coming in in the night — not against the bank manager turning rogue. Alarms secure against armed robbers, not against tellers sneaking cash out of the drawer. Security cameras watch the tellers, but do no good against computer hackers or fraudsters. Reducing the opportunity for insiders to attack the company means considering how insiders differ from outsiders, and what security measures may be employed against them.

The primary advantages of an insider are twofold: knowledge and authorization. They have knowledge of the defenses — Jérôme Kerviel had worked in Société Générale’s internal audit and control department, so he knew exactly how they searched for and detected rogue trades. And they have authorization in that an internal attack often does not involve any sort of elevation of privilege — only an employee misusing their legitimate authority. Even the right to be inside the building, rather than having to break in through a firewall, is a measure of authority an outsider lacks.

However, insiders also have a disadvantage as compared to outsiders: proximity. It is often much easier to verify a suspicion that someone has committed a crime than it is to find the culprit to begin with. As is often depicted in crime dramas and classic mystery plots, investigators have a much easier time finding out who committed a crime when they have specific suspects to question and investigate than when a crime is committed by a random stranger with no known connection with the victim. Fingerprints and DNA evidence do little good if you have no suspect to compare them to. The same goes for electronic forensics — a hacker will often leave plenty of evidence of their activity on their own computer, and a monitoring device at their ISP would likely detect their activities. However, if the hacker is external, or even in a foreign country, as a security professional you’re unlikely to have any idea where their computer is, let alone have access to it. When an insider attacks, on the other hand, the traces can be very obvious. Attacks come from IPs within your perimeter, and your own monitoring equipment might have seen the entire attack end-to-end. The simple fact that there are only so many people inside the company capable of mounting an electronic attack limits the suspects and allows each to be investigated.

Smart insiders know this. While an outsider may believe he is able to hide from detection simply by being a needle in a haystack (how many companies really inspect all their edge firewall logs, even with an automated process?), an insider knows that he’s under observation and has a substantial chance of getting caught. Thus, he will almost always take steps to cover his tracks — steps an outsider would take, too, but the insider has the advantage of legitimate authorization to bolster his abilities.

Deterring internal attackers, then, involves neutralizing their advantages while maximizing their disadvantages. There is little to be done about their first advantage (knowledge of internal procedures), but actions can be taken to mitigate the power of legitimate authorization and to maximize the disadvantage of proximity.

Preventing Abuse of Legitimate Authority

Developers can modify the source code of your product — that’s what developers do. System administrators can change permissions on files and access secured areas — that’s their job. However, no one person should have the ability to do everything — this is the principle behind separation of duties.

Separation of duties enables legitimate tasks to be carried out while making it more difficult for these same powers to be abused. There are three basic controls that can be placed on a power to help prevent abuse:

- Authorization: deciding who is permitted to exercise the power.
- Recording: keeping a record of when, how, and by whom the power was exercised.
- Custody: actually carrying out the task itself.

For example, imagine your company needs to deploy new code to a server in a datacenter. The person responsible for the authorization function sets the access control policies on the various machines to determine who has access. The person or system responsible for the recording function makes entries in change-control logs so that it is clear what has been done. The person with custody of the system actually places the new files on the server. In a small company — or one with poor internal controls — these could all be the same person.

If these tasks are all handled by the same person, the potential for abuse is very high. If this person wants to deploy malicious code to the servers, whether to monitor transactions or even to steal money from accounts outright, he can do so. He can authorize himself or another (possibly even a fake account) to make any change desired, carry out the task, and then erase or suspend the logs or records of not only the action but also the authorization changes.

On the other hand, if separate people are responsible for each of these tasks, none of them is capable of perpetrating a fraud on their own. This process could be organized as follows:

- The product team writes and modifies the code, but has no access to the datacenter.
- The operations team deploys code to the servers in the datacenter, but has no access to modify the code itself.
- A separate team within IT controls the audit logging, and can neither write code nor deploy it.

This makes fraud much harder. A member of the product team can tamper with the code, but has no way to actually get it into the datacenter. An operations engineer can access the datacenter, but lacks access to the code. And either one making a change leaves a trail — since audit logging is controlled by another team within IT, neither are able to turn auditing off or simply overlook suspicious entries.
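The three-way split above can be sketched in code. The following is a minimal, hypothetical illustration (the role names, bundle names, and the workflow itself are invented for the example), showing a deployment gate that refuses to act unless authorization, custody, and recording are held by three different people:

```python
# Sketch of a separation-of-duties check for a deployment workflow.
# All names and the workflow itself are hypothetical illustrations.

class SeparationOfDutiesError(Exception):
    """Raised when one person holds too many roles in a change."""

def deploy(bundle, authorized_by, deployed_by, logged_by, audit_log):
    """Allow a deployment only when authorization, custody, and
    recording are held by three distinct principals."""
    if len({authorized_by, deployed_by, logged_by}) < 3:
        raise SeparationOfDutiesError(
            "authorization, deployment, and audit logging must be "
            "performed by different people")
    # The recording function: an entry the other two roles cannot erase.
    audit_log.append({"action": "deploy", "bundle": bundle,
                      "authorized_by": authorized_by,
                      "deployed_by": deployed_by})
    return f"deployed {bundle}"

audit_log = []
deploy("release-1.2", "alice", "bob", "carol", audit_log)  # succeeds
```

In a real environment the check would be enforced by the access control and change-management systems rather than application code, but the invariant is the same: no single principal may fill more than one role.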

Maximizing the Chance of Detection

Separation of duties limits the ability of a person with legitimate authority to abuse it. However, there is another thing that can deter people with the ability to abuse their authority from actually doing so — cause them to believe they are likely to be caught. Internal attackers know what audit and logging systems are being used within an environment, and they know where the “blind spots” in those systems are. Many criminals commit a crime only when the opportunity presents itself. By eliminating failures in monitoring, we eliminate temptation as well as improve our forensic abilities.

Most of the systems used in a modern IT environment have extensive auditing capabilities. (Note that I am using the word “auditing” in the sense of creating an audit trail, not in the sense of some external consultant or accountant reviewing that trail.) Windows machines create an event log of almost everything that happens on them; in an Active Directory domain, security events are also logged on the domain controller. UNIX/Linux/Solaris machines create various system logs, and have the ability to send them to remote machines as they occur. Databases like Oracle and SQL Server have fine-grained audit capabilities and are able to record every access to sensitive data and even detect potential data aggregation attacks. Web servers record every access, as do keycard-based entry control systems, VPN concentrators, firewalls, and a variety of network devices. An attacker, even an internal one, leaves a bewildering array of changes, alerts, and traces every time he does anything.

However, this does little good if no one notices the tracks! In addition, they are often ephemeral — a Windows Security Event Log will grow too large and begin overwriting itself in a matter of hours in a large corporation. If the logs are not available to investigate an incident, they might as well not exist at all.
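One concrete mitigation for this ephemerality is to ship log entries off-box the moment they are written, so a local administrator cannot quietly truncate the trail. Here is a minimal sketch using Python's standard library; the collector address and the event text are placeholders for whatever central log server and event format an environment actually uses:

```python
import logging
import logging.handlers

# Forward audit events to a remote collector as they occur.
# "localhost" and 514 stand in for a real central syslog server.
logger = logging.getLogger("security-audit")
logger.setLevel(logging.INFO)
handler = logging.handlers.SysLogHandler(address=("localhost", 514))
logger.addHandler(handler)

# Each event leaves the machine immediately; deleting the local
# copy no longer destroys the evidence.
logger.info("user=jsmith action=acl-change target=/finance/ledger")
```

The same principle applies regardless of platform: Windows event forwarding, UNIX remote syslog, and database audit shipping all move the record out of reach of the person who generated it.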

One of the most powerful ways a company can deter internal attacks is with the implementation of a Security Information and Event Management product. There are several of these on the market (I have experience implementing the SenSage event data warehouse, but ArcSight, Symantec, IntelliTactics, Computer Associates, and others have competing products), but the idea behind all of them is to gather event data from a variety of sources and aggregate it in one place. This has two major advantages:

- The event data is preserved on a separate system, outside the control of the administrators being monitored, so an insider cannot erase or tamper with the trail.
- Events from different sources can be correlated with one another, revealing patterns of activity that no single log would show.

Different SIEM systems have different advantages, and while all will provide separation of duties, some are better at handling massive data volumes than others. Likewise, the data mining involved in event correlation is still a black art in many cases, so different systems have different capabilities in that regard. However, just knowing that a SIEM exists, is monitored, and is out of reach for would-be fraudsters to tamper with can be a powerful deterrent against rogue employees.
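As a toy illustration of the correlation idea, the sketch below merges events from two hypothetical sources into one timeline and flags a user who reads sensitive data shortly after changing the audit policy. The event fields, sources, and the rule itself are invented for the example; real SIEM correlation rules are far richer:

```python
from datetime import datetime, timedelta

# Events as a central collector might receive them; fields are invented.
domain_events = [
    {"time": datetime(2008, 1, 21, 9, 0), "user": "jdoe",
     "action": "audit-policy-change"},
]
database_events = [
    {"time": datetime(2008, 1, 21, 9, 5), "user": "jdoe",
     "action": "read-sensitive-table"},
]

def correlate(events, window=timedelta(minutes=15)):
    """Flag sensitive reads that follow an audit-policy change
    by the same user within the given window."""
    timeline = sorted(events, key=lambda e: e["time"])
    alerts = []
    for i, event in enumerate(timeline):
        if event["action"] != "read-sensitive-table":
            continue
        for prior in timeline[:i]:
            if (prior["user"] == event["user"]
                    and prior["action"] == "audit-policy-change"
                    and event["time"] - prior["time"] <= window):
                alerts.append((prior, event))
    return alerts

alerts = correlate(domain_events + database_events)  # one correlated pair
```

Neither event alone looks suspicious; only the aggregated timeline reveals the pattern, which is exactly the advantage a SIEM provides over isolated per-system logs.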

Conclusion

The possibility of internal attacks is an unfortunate consequence of the specialization of modern society — those with the capability to build and maintain complex systems are often those best able to compromise and abuse them. However, good design of internal controls centered around separation of duties combined with judicious use of technical information-management solutions greatly reduces the opportunity for insiders to turn against a company’s infrastructure.

