Rethinking Cyber Defense

Originally published as:

Aucsmith, David. “Rethinking Cyber Defense.” High Frontier 7, no. 3 (May 2011): 35-37.

Computer systems[1] have been under attack almost from the beginning of computers themselves.[2]  Over the years we have developed many tools and processes for keeping them secure.  Arguably, we have not been very successful.  We have now reached the stage where the cyber environment has become a unique war fighting domain.  A war fighting domain in this sense is defined by the unique tactics, techniques, and procedures (TTPs) needed to defend or attack computer systems in military enterprises.  We have not been able to create computer systems that are immune to the attack weapons (malicious computer code, malicious network communications, or social engineering) of our adversaries.  Years of failure suggest that we must rethink how we protect computer systems.

Microsoft products and customers have been a part of cyber-attacks since the first personal computer was connected to the Internet.  Over this time, we have developed insights into how and why cyber-attacks succeed or fail.

Why Systems Are Not Secure

When I first started working at Microsoft, my Department of Defense colleagues would frequently ask a question that was roughly framed as, “Why don’t you guys just write better software to begin with?  Then we would not have these problems.”  Obviously, the problem is more complicated than that question suggests.  The reasons it is more complicated are important to understanding how computer systems might be made more secure in the future.  The answer depends on two factors.

First, we have adversaries.  Anytime that one has adversaries – who are adapting their weapons, tactics, and deployment to the development of your technology – one has a classic arms race.  I would respond to my Department of Defense colleagues with, “I will create a computer system that will remain secure as soon as you build an airplane that cannot be shot down.”  This is the dynamic of adversarial relationships.  However, adversarial relationships in cyber space are further exacerbated by the highly asymmetric nature of cyber engagements, the significant mobility afforded the attacker, and the difficulty of attribution.

The second factor in building secure computer systems is that we are building incredibly complicated systems.  We are building computer systems that are far more complex than our ability to completely model or understand their functionality, in a formal sense.  Indeed, as computer systems become more multi-threaded, multi-processor enabled, and data driven, they will become more non-deterministic and less able to be modeled or completely understood.

The reality is that we are building incredibly complex computer systems, which we have no formal way to analyze, and we are placing them in front of experienced, resourceful, and determined adversaries.  This set of circumstances guarantees that vulnerabilities will be found in the computer system, that weapons and tactics will be developed to exploit those vulnerabilities, and that the computer system will be successfully compromised at some point in the future.

Why Security Testing Is Only Part of the Answer

The traditional approach to achieving secure computer systems has been to develop computer security evaluation and testing criteria.  Over the years, there have been numerous attempts to create certifications for computer security.  These include the Department of Defense Trusted Computer System Evaluation Criteria (TCSEC), better known as the Orange Book, published in 1985.[3]  More recently, there has been the Common Criteria (ISO/IEC Standard 15408).[4]  These certification methodologies, and others like them, specify a set of security features and assurances and then rely on compliance testing and analysis.

These types of methodologies identify where the computer system as built does not meet the criteria as specified.  The problem with this approach is that very frequently vulnerabilities occur because the computer system as built has additional functionality not specified by the criteria.  A buffer overrun is an additional entry point not in the specification.  The failure of the certification-based approach is that it is impossible to create a systematic way to find all such vulnerabilities.  This can easily be verified by asking, “How do you know when you have found them all?”

This is not to say that security certifications are useless.  They are essential for confidence in identifying those features which do not meet standards.  They are necessary but not sufficient.  Where they do not work well, that is, where a computer system has additional functionality not specified, we need a different methodology, one that at least approximates a search for that additional functionality.  One way to do this is to ask the question, “Of all the bad things I know about, are there any present in the system?”  We call this threat modeling.[5]

Threat modeling, in this context, is a methodology where we characterize the code constructs that lead to vulnerabilities and successful attacks.  We search for and remove those constructs.  What makes threat modeling particularly valuable is that it is not static.  As new code constructs that lead to vulnerabilities are identified, we can add them to the threat model database for use in analyzing future systems or even for reanalyzing past systems.  As useful as threat modeling is, even in conjunction with classic certification methodologies, it cannot alert you to vulnerabilities that have never before been seen or imagined.
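The construct-search idea can be sketched as a small pattern database applied to source text.  The patterns below are illustrative examples of known-bad constructs (unsafe C string functions), not a real threat-model database.

```python
import re

# Illustrative sketch of a threat-model database: named patterns for code
# constructs known to lead to vulnerabilities. The entries here (unsafe C
# string functions) are examples only; a real database would be far richer
# and would grow as new vulnerable constructs are identified.
THREAT_DB = {
    "unchecked strcpy": re.compile(r"\bstrcpy\s*\("),
    "unchecked gets": re.compile(r"\bgets\s*\("),
}

def scan(source):
    """Return the names of known-bad constructs found in the source text."""
    return sorted(name for name, pattern in THREAT_DB.items()
                  if pattern.search(source))

sample = "void f(char *s) { char buf[8]; strcpy(buf, s); }"
print(scan(sample))  # ['unchecked strcpy']
```

Because the database is data rather than fixed logic, newly identified constructs can be appended and older systems rescanned, which is what makes the approach non-static.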

Cyber Defense as Maneuver Warfare

Since we must place highly complex computer systems in the presence of adversaries – computer systems that cannot be definitively tested – we need to approach computer security in a new way.  We must acknowledge that we cannot build computer systems that are secure and will remain so in perpetuity.  Rather, we must build computer systems that are adaptable, configurable, and give us the ability to anticipate and respond to our adversary’s behavior.  It is the difference between the Maginot Line and maneuver warfare.  We need to create the equivalent of maneuver warfare in cyberspace.

To make systems adaptable, we need to be able to change their behavior in response to input or to change their attack surface.[6]  One of the ways we do this is by patching the software of the computer system.  Patching and defensive updates, such as anti-virus signature updates, are how we achieve maneuver warfare in cyberspace.  We should not view patching as a failure but rather as successfully maneuvering the software baseline against an attack by an adversary.  We need to strive to make patching frequent, quick, and transparent.

As it is likely that an adversary will, at some time, identify and exploit a previously unknown vulnerability, we must approach the problem differently.  We must make the use of an unknown cyber weapon prohibitively expensive, in a broad sense, for our adversary.  While we have no choice but to allow our adversary the first use of a new weapon, we should immediately sense the attack and then rapidly adapt every other computer system in the enterprise to be immune to the weapon’s future use.  Our adversary can use the weapon only once, after which it is useless (assuming computer systems are patched, updated, and configured correctly).

This change in philosophy implies a level of sensors, communications, and what is now called active defense that is rarely found today.  However, it is obtainable with current technology.

Sensors and Intelligence

I use the term sensor here in its broadest context.  It is some mechanism, software, or process, which provides information about the state of the system in which it is deployed.  This information is then used to generate indications and warnings – that is, to generate intelligence.  Ultimately, intelligence is derived from a rich and diverse population of sensors and is aggregated and correlated for maximum usefulness.

When organizations develop a cyber-situational awareness capability, typically they instrument their IT environment as a sensor.  Usually this includes the deployment of intrusion detection systems, anti-virus systems, network traffic analysis systems, and the like.  What we have found over time is that these are our worst sensors for situational awareness.  They give no indication of what our adversary is planning; they can sometimes show that we are under attack; and they are excellent for forensics after an attack has occurred.  To put this into a military metaphor, this is akin to not knowing you are under attack until your adversary is in the foxhole with you.  This is a little too late.  In no other war fighting domain would we accept this level of situational awareness.

The question then is how to develop and deploy sensors that can give us indications and warnings of our adversary’s intentions and actions.  That is, provide intelligence.  While this may seem like an impossible task, there are practical sensors that have some of these characteristics.  Honeypots are one such sensor.[7]  If placed in desirable locations, they may provide information about attack tools and weapons.  If they are designed to be immune to known attacks, then the only successful attacks will be ones that are hitherto unknown.  They can capture new tools and techniques and send that information to analysts or analysis machines, where patches, signatures, settings, and/or heuristics can be developed.  The goal would be to rapidly disseminate the patches, signatures, settings, and heuristics to all other enterprise components to make them immune from the same attack.  Thus, the only successful attack using a new weapon would be the attack on the honeypot.
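The sense-and-immunize loop described above can be sketched minimally as follows.  A honeypot observes a payload; if it is previously unknown, a signature is derived and disseminated so every other host blocks the weapon’s next use.  The exact-hash signature used here is the weakest possible form (trivially evaded by mutating the payload) and stands in for real indicator extraction.

```python
import hashlib

# One shared set stands in for enterprise-wide signature dissemination.
known_signatures = set()

def honeypot_observe(payload):
    """Return a new signature for a previously unseen payload, else None."""
    sig = hashlib.sha256(payload).hexdigest()
    return None if sig in known_signatures else sig

def immunize(sig):
    """Disseminate the signature to all hosts (here: add to the shared set)."""
    known_signatures.add(sig)

def host_blocks(payload):
    """Would an already-immunized host block this payload?"""
    return hashlib.sha256(payload).hexdigest() in known_signatures

weapon = b"\x90\x90 exploit-of-the-day"
sig = honeypot_observe(weapon)   # first use succeeds -- but only on the honeypot
immunize(sig)                    # rapid, automated dissemination
print(host_blocks(weapon))       # True: the weapon's second use fails everywhere
```

The design point is the speed of the loop, not the signature format: the weapon is rendered useless only if dissemination outpaces the adversary’s next use of it.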

Traditional intelligence methods, when targeted against the cyber domain, may also be good sensors.  For example, collecting open source intelligence about cyber-attack tool development could identify potentially unknown weapons.  Again, the point is to identify a new weapon by any means available and then rapidly immunize the enterprise against it.  Other potential sensors include heuristics-based anti-virus software, network scanners, configuration monitors, and such – as long as they are tied to an automated processing capability that can use that intelligence to develop suitable countermeasures.

In order for a sensor to be used for automated defense at scale, it must have a very good signal-to-noise ratio.  False positives must be rare.  Most network-based sensors do not have this property.  Network-based sensors have difficulty knowing which specific traffic is from legitimate processes and which is from malicious processes.  Endpoint (or host-based) sensors, such as anti-virus software, honeypots, and the like, are able to disambiguate traffic to their hosts because malware must “reveal” itself to take control of the system.
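The scale problem can be made concrete with a quick base-rate calculation (all numbers below are illustrative assumptions, not measurements): even a sensor that catches 99% of real attacks and flags only 0.1% of benign events produces alerts that are almost entirely false when real attacks are rare.

```python
# Illustrative base-rate arithmetic for automated defense at scale.
events_per_day = 10_000_000   # network events in a large enterprise (assumed)
attack_rate    = 1e-6         # 1 in a million events is malicious (assumed)
false_pos_rate = 0.001        # sensor flags 0.1% of benign events (assumed)
detection_rate = 0.99         # sensor catches 99% of real attacks (assumed)

attacks      = events_per_day * attack_rate                  # ~10 real attacks
true_alerts  = attacks * detection_rate
false_alerts = (events_per_day - attacks) * false_pos_rate   # ~10,000 false alarms

precision = true_alerts / (true_alerts + false_alerts)
print(round(precision, 4))  # 0.001: only ~0.1% of alerts are real attacks
```

An automated response pipeline driven by such a sensor would act on a thousand false alarms for every real attack, which is why false positives must be rare before automation is feasible.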

Hygiene

To be effective, sensors must have a high signal-to-noise ratio.  There are two ways to achieve this: we can develop sensors that have very high selectivity, or we can reduce the background noise.  One way to reduce the noise is to improve the security posture of the enterprise as a whole.  Out-of-date or unpatched computer systems succumb to attacks from weapons which have long since been identified and for which immunization is available.  That is, if they had been patched or brought up to the current version of their software, they would not have been compromised.  If computer systems easily succumb to known attacks, it is superfluous to protect them from unknown attacks.  This implies a minimum level of hygiene that must be present in the enterprise as a precondition for effective defense.

There are other issues besides unpatched or out-of-date systems that contribute to security vulnerabilities, such as poor system administration or incorrect system settings.  Indeed, dynamic modification of system settings may be an effective way to counter an attack.  For example, dynamically disabling auto-execute of USB storage devices would have immunized computer systems from attacks utilizing USB storage as an attack vector.  This level of hygiene is technically easy to achieve but rarely attained in practice in most enterprises.  It requires tools and processes for distributing patches and configuration changes quickly, and it requires a willingness to upgrade systems to new versions of software and hardware.
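The dynamic-mitigation idea can be sketched as a small configuration push: when new intelligence identifies an attack vector, a mitigation setting is applied to every host in an enterprise inventory.  The inventory structure, host names, and setting names here are hypothetical stand-ins for real configuration-management tooling.

```python
# Hypothetical sketch of pushing a mitigation setting (e.g. disabling USB
# auto-execute) across an enterprise. The inventory is an in-memory dict
# standing in for a real configuration-management system.
def push_mitigation(inventory, setting, value):
    """Apply a configuration change to every host; return hosts changed."""
    changed = []
    for host, config in inventory.items():
        if config.get(setting) != value:
            config[setting] = value   # in practice: a remote management call
            changed.append(host)
    return changed

inventory = {
    "ws-001":  {"usb_autorun": True,  "patch_level": 42},
    "ws-002":  {"usb_autorun": False, "patch_level": 42},
    "srv-001": {"usb_autorun": True,  "patch_level": 40},
}

# New intelligence: USB storage is being used as an attack vector.
changed = push_mitigation(inventory, "usb_autorun", False)
print(changed)  # ['ws-001', 'srv-001']: the hosts that still had autorun on
```

The speed and completeness of this push, rather than the sophistication of any single setting, is what the hygiene argument above demands.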

Implications

The traditional view of computer security has not led to secure systems.  We must rethink how we approach cyber defense.  Computer systems cannot be made permanently secure in an adversarial environment.  If we accept that premise, then we are forced to make computer systems adaptable and resilient.  To do so, we must have knowledge on which to base our adaptations, and we must have a process for handling that knowledge at a speed that outpaces our adversary’s ability to exploit a vulnerability.  This requirement leads to the conclusion that we need sensors that can detect the first use of a cyber-weapon and the tools, processes, and mechanisms to communicate the resultant knowledge to the entire enterprise.

Also inherent in this argument is that we must ensure that the enterprise can only be attacked by unknown weapons; otherwise, there is little incentive for the adversary to deploy new or more sophisticated weapons.  Why should an adversary use new weapons against an enterprise when old ones work sufficiently well?

Achieving cyber defense then requires three things:

  • Enterprise wide hygiene – Up to date, correctly configured, and patched systems.
  • Sensors that can detect the first use of a new weapon – Preferably outside of the enterprise.
  • Processes for using the knowledge of a new weapon to immunize the enterprise – At speeds greater than the reaction time of the adversary.

There are working examples of each of these requirements deployed in enterprises today.  No new or revolutionary technology is required to achieve this.  It simply requires the will to do so.

————————–

[1] Computer systems in this context mean any computational device including desktop computers, servers, routers, and firewalls.

[2] Brian Krebs, “A Short History of Computer Viruses and Attacks,” The Washington Post, February 14, 2003.

[3] Department of Defense, Trusted Computer System Evaluation Criteria, DOD 5200.28-STD (Washington, DC: Department of Defense, 1985).

[4] Joint Technical Committee ISO/IEC JTC1, Information Technology, Information Technology: Security Techniques: Evaluation Criteria for IT Security (Genève: ISO/IEC, 1999).

[5] Frank Swiderski and Window Snyder, Threat Modeling (Redmond, WA: Microsoft Press, 2004).

[6] Michael Howard and David LeBlanc, Writing Secure Code, 2nd ed. (Redmond, Wash.: Microsoft Press, 2003).

[7] Honeynet Project. Know Your Enemy: Learning About Security Threats. (Boston: Addison-Wesley, 2004).
