The Character of Technology Defines the Character of War

The third tenet of this theory is that the characteristics of the technology that created a domain define the character of war fought in that domain.  The cyber domain is unique among domains of war in that it is a manmade domain.  While it occupies geographical space, it is not defined by space.  Cyberspace is the virtual environment created by the interconnected network of computing devices, communications channels, and the humans that use them.  War in this domain is not fought with sharp sticks or even rockets.  It is war fought in a virtual environment with and by information.

The Short History of Cyberspace

Cyberspace, as we understand it today, had its genesis in the work of the Advanced Research Projects Agency (ARPA) in the early days of network computing.  In 1969, that work, along with ideas contributed by both the Massachusetts Institute of Technology (MIT) and the British National Physical Laboratory, led to the linking of four computers into ARPANET, the progenitor of today’s Internet.

At the same time, the potential threat of a surprise attack by Soviet nuclear forces prompted the US Air Force to fund a research project investigating how one might build a communications network that could survive such an attack.[1]  In 1964, Paul Baran, working for the RAND Corporation, published a series of papers addressing this problem.[2]  Baran’s idea was to create a web of computers and communications devices, linked by transmission lines, with no centralized control centers.  This web of interconnected computers would send messages broken into small “packets” through the network.  He recognized that the distributed network of computers would also need an “intelligence” to survive a massive attack, but that the intelligence must be distributed as well.  His idea was that the distributed network would have no preset routing; rather, each computer in the network would use information in the message itself to find the optimal route for that message.  Each computer in the network would maintain a “routing table” recording how quickly recently sent message-packets reached their destinations.  The computers would thus be able to make intelligent routing decisions based on ever-changing historical data.  In effect, what Baran created was a network composed of unmanned digital switches that possessed a self-learning capacity within a changing environment.
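
Baran’s routing scheme can be sketched in a few lines of code.  The fragment below is a minimal illustration, not a reconstruction of his design: it assumes a node that keeps a smoothed estimate of how quickly recent packets reached their destination over each link and forwards new packets over the historically fastest one.  The structure names, the smoothing constant, and the sample values are assumptions made purely for illustration.

    /* Illustrative sketch of Baran-style adaptive routing: each node keeps a
     * per-neighbor estimate of how quickly recent packets reached a destination
     * and forwards new packets along the historically fastest link.
     * All names and the smoothing constant are assumptions for illustration. */
    #include <stdio.h>

    #define NUM_LINKS 4

    struct route_entry {
        const char *neighbor;     /* next hop for this link            */
        double avg_delay_ms;      /* smoothed historical delivery time */
    };

    /* Update the historical estimate after a packet is acknowledged. */
    static void record_delivery(struct route_entry *e, double observed_ms)
    {
        const double alpha = 0.2;                       /* smoothing weight */
        e->avg_delay_ms = (1.0 - alpha) * e->avg_delay_ms + alpha * observed_ms;
    }

    /* Choose the next hop with the best (lowest) historical delay. */
    static const struct route_entry *best_route(const struct route_entry *t, int n)
    {
        const struct route_entry *best = &t[0];
        for (int i = 1; i < n; i++)
            if (t[i].avg_delay_ms < best->avg_delay_ms)
                best = &t[i];
        return best;
    }

    int main(void)
    {
        struct route_entry table[NUM_LINKS] = {
            {"node-A", 40.0}, {"node-B", 25.0}, {"node-C", 60.0}, {"node-D", 30.0}
        };

        record_delivery(&table[1], 90.0);   /* node-B suddenly slows down */
        record_delivery(&table[3], 12.0);   /* node-D performs well       */

        printf("forward via %s\n", best_route(table, NUM_LINKS)->neighbor);
        return 0;
    }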

The Structure of Cyberspace

As it has evolved, cyberspace can be thought of as composed of computers (devices performing computation) having some degree of intelligence, which are linked together by a network of communications channels and used by people to transmit, manipulate, or receive information.

Computing Devices

The computing devices that create cyberspace come in many forms and perform many tasks.  They share the ability to take information as input, manipulate that information according to an embedded logic or program, and then output information.  Examples of such computing devices in cyberspace include sensors, routers, switches, personal computers, controllers, output devices, and myriad other components.  What is important is that the computing devices of cyberspace have “intelligence,” due to their programming, and they respond to input based on that programming.  They may also have state.  That is, they may keep a history of prior input or computational results, which they use to inform future outputs.  Different inputs, or different histories of inputs, may generate different outputs.
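
The role of state described above can be made concrete with a small sketch.  The following C fragment models a hypothetical computing device whose output for a given input depends on the history of prior inputs; the running-average logic is an arbitrary assumption chosen only to show that identical inputs can yield different outputs.

    /* Sketch of a stateful cyberspace component: output depends on both the
     * current input and the accumulated history of prior inputs. */
    #include <stdio.h>

    struct device {
        int prior_inputs;     /* state: how many inputs have been seen */
        long running_total;   /* state: sum of all prior inputs        */
    };

    /* The same input value may yield different outputs as the state evolves. */
    static long process(struct device *d, int input)
    {
        d->prior_inputs++;
        d->running_total += input;
        return d->running_total / d->prior_inputs;   /* output: running average */
    }

    int main(void)
    {
        struct device dev = {0, 0};
        printf("%ld\n", process(&dev, 10));   /* first  10 -> 10 */
        printf("%ld\n", process(&dev, 4));    /*        4  -> 7  */
        printf("%ld\n", process(&dev, 10));   /* second 10 -> 8  */
        return 0;
    }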

Communications Channels

Communications channels connect computing devices in cyberspace.  They carry information from one component to another.  All that is required to constitute a communications channel is the ability to deliver information, regardless of the means.  Examples include fiber optic cable, microwave beams, light, or even mailing a disk drive from one point to another.  Communications channels have many properties, but, for our purposes, we will consider:

  • Path – Path is defined as the arc connecting computing devices.
  • Reliability – Reliability is a function of the noise of the channel.
  • Bandwidth – Bandwidth is the measure of how much information the channel can carry per unit time.
  • Availability – Availability is different from reliability.  A highly reliable channel may only be available for a limited part of the day.

Communications channels may be one-to-one, one-to-many, or many-to-one.  They may be static or dynamic, unidirectional or bidirectional.  All that matters is that they pass information.
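
A minimal sketch of a channel described by the properties listed above follows.  The field names, units, and example values are assumptions used only to make the four properties concrete.

    /* Sketch of a communications channel described by the four properties
     * discussed above.  Field names and units are illustrative assumptions. */
    #include <stdbool.h>
    #include <stdio.h>

    struct channel {
        const char *path;            /* arc connecting two computing devices   */
        double reliability;          /* probability a bit arrives uncorrupted  */
        double bandwidth_bps;        /* information delivered per unit time    */
        double availability;         /* fraction of the day the channel is up  */
        bool bidirectional;          /* channels may be uni- or bidirectional  */
    };

    int main(void)
    {
        /* A highly reliable channel that is only available part of the day. */
        struct channel satellite_link = {
            .path = "ground-station <-> satellite",
            .reliability = 0.999999,
            .bandwidth_bps = 10e6,
            .availability = 0.35,
            .bidirectional = true
        };

        printf("%s: reliable %.4f%% of bits, available %.0f%% of the day\n",
               satellite_link.path,
               satellite_link.reliability * 100.0,
               satellite_link.availability * 100.0);
        return 0;
    }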

People

People are a component of cyberspace – perhaps the least reliable component.  They generate, manipulate, and transmit information according to highly variable “programming.”  Their contribution to the characteristics of cyberspace is mostly as a component of Clausewitzian friction – discussed later.

Implications of Cyberspace Structure

Starting with simple goals and an elegant design, cyberspace has evolved into a domain whose total structure is too complex to be completely understood or analyzed.  The structure of cyberspace, the consequence of its architecture and components, gives cyberspace inherent properties that are important in considering war in the cyber domain.  In particular, this structure limits what one can know about the functioning of cyberspace, including the ability to attribute actions to individuals or organizations.  Among these properties are:

  • Self-organization – Components of cyberspace (computing devices, communications channels, and humans) can be added or removed.  They can be moved or modified and cyberspace will autonomously recognize them and reorganize accordingly.  An important concept for attribution is that there is no requirement for cyberspace to keep any information regarding previous organizations.
  • Historical learning – Each node or computing device in cyberspace routes packets based on the aggregate efficiency of the communications channels that were used to route previous packets.  That is, when routing a new packet, a cyberspace computing device routes it through the historically most efficient path to its destination.  The actual efficiency of the newly routed packet’s channel is then used to update the historical understanding for future routing.  For the purposes of attribution, a computing device cannot tell you how a packet was routed, only how it will route a future packet.
  • Scale-free network – Unlike the distributed network that was originally envisioned, cyberspace has organized itself around nodes or hubs of high connectivity.[3]  Any attempt to trace a path back through such high-density nodes may be impossible.
  • Recursive organization – Cyberspace is organizationally recursive.  That is, subsets of cyberspace have the same general features and organization as cyberspace in whole.  One can think of cyberspace as being composed of systems of systems.[4]  The complexity of the overall structure is masked by the abstraction of subsets.  For simplicity, subsets do not necessarily present information up to the next level.  Some information for attribution may not be presented to upper levels.  For example, user names local to a subset (e.g., a company’s network) of cyberspace are not forwarded to the next level (e.g., Internet Service Provider).
  • Local knowledge – Cyberspace operates globally based only on local knowledge.  Each component of cyberspace makes decisions based solely on its own local knowledge.  The overall behavior of any given subset of cyberspace is the aggregated result of the effects of each component’s local decision.
  • Ephemeral knowledge – Knowledge in cyberspace components is local and may be ephemeral.  That is, the information used by a given component of cyberspace to make a local decision may not be available to other components and may not be kept after the decision is made.  For example, the user/address mapping information of protocols like Network Address Translation (NAT) is not generally retained when no longer needed (a minimal sketch of this behavior follows this list).
  • Good faith effort – The design of cyberspace assumed component failure, but not component duplicity.  Security of cyberspace operations was not a requirement of the design.  Intermediary nodes may manipulate information in unanticipated ways.
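
To illustrate the ephemeral knowledge property, the sketch below models a hypothetical NAT-like device: the mapping needed to route a flow exists only while the flow is active and is discarded when the flow ends, leaving nothing behind for later attribution.  The table layout and function names are assumptions for illustration and do not describe any particular NAT implementation.

    /* Sketch of ephemeral knowledge in a NAT-like device: the mapping needed
     * to route a flow exists only while the flow is active and is discarded
     * afterward, leaving nothing behind for later attribution. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_FLOWS 8

    struct nat_entry {
        char internal_host[32];   /* private address behind the NAT  */
        int  external_port;       /* public port used for this flow  */
        int  in_use;              /* 0 = slot free, 1 = flow active  */
    };

    static struct nat_entry table[MAX_FLOWS];

    /* Create a mapping when a new outbound flow starts. */
    static int open_flow(const char *internal_host)
    {
        for (int i = 0; i < MAX_FLOWS; i++) {
            if (!table[i].in_use) {
                strncpy(table[i].internal_host, internal_host,
                        sizeof(table[i].internal_host) - 1);
                table[i].external_port = 40000 + i;
                table[i].in_use = 1;
                return table[i].external_port;
            }
        }
        return -1;   /* table full */
    }

    /* Discard the mapping when the flow ends; no history is kept. */
    static void close_flow(int external_port)
    {
        for (int i = 0; i < MAX_FLOWS; i++) {
            if (table[i].in_use && table[i].external_port == external_port) {
                memset(&table[i], 0, sizeof(table[i]));
                return;
            }
        }
    }

    int main(void)
    {
        int port = open_flow("10.0.0.7");
        printf("flow mapped to public port %d\n", port);
        close_flow(port);
        /* After close_flow(), asking "who used port 40000?" has no answer. */
        return 0;
    }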

The last of these properties, the assumption of good faith, is, of course, directly related to the security of cyberspace in general and cyber-attacks in particular.

Cyber-Attacks

It is helpful to divide cyber-attacks into three different types, based on their objective and the US legal authorities that apply.

  • War – (US Title 10) Attacks to deceive, deny, disrupt, degrade or destroy.
  • Espionage – (US Title 50) Spying by a government to discover military and political secrets.
  • Crime – (US Title 18) Theft, fraud, or other criminal acts.

One of the difficulties in defending against cyber-attacks is that the tools, techniques, and procedures used to attack are the same regardless of the type of attack.  They differ only in their objective.  Although the technology is the same regardless of the type of attack, this paper will address only those attacks with the objective of war.  That is, attacks where the end objective is to deceive, deny, disrupt, degrade or destroy.

The Failure of Security Principles

There is a general agreement among computer security professionals that there are three fundamental principles of security in the cyber domain.  They are:

  • Confidentiality – Preventing unauthorized disclosure of information.
  • Integrity – Preventing unauthorized modification of information.
  • Availability – Assurance that information is available when required.

These three principles are truly fundamental in that one can construct all other principles from them.  For example, unauthorized access to a system is irrelevant if it violates none of the three principles.  As soon as the entity engaged in the unauthorized access can “see” information, whether data or state information, for which it is not authorized, there is a violation of the principle of confidentiality.  Likewise, if the entity can change or delete information or state, there is a violation of the principle of integrity.  It is a useful shorthand to think of confidentiality as the authority to read, integrity as the authority to write, and availability as the ability to do either as required.
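
This shorthand can be expressed as a toy access check, sketched below.  The permission model is an assumption made only for illustration: a request that reads without read authority violates confidentiality, and one that writes without write authority violates integrity.

    /* Toy sketch of the read/write shorthand: a request violates
     * confidentiality if it reads without read authority and violates
     * integrity if it writes without write authority. */
    #include <stdbool.h>
    #include <stdio.h>

    struct authority {
        bool may_read;    /* confidentiality: authority to read */
        bool may_write;   /* integrity: authority to write      */
    };

    enum violation { NONE, CONFIDENTIALITY, INTEGRITY };

    static enum violation check(struct authority a, bool wants_read, bool wants_write)
    {
        if (wants_read && !a.may_read)
            return CONFIDENTIALITY;
        if (wants_write && !a.may_write)
            return INTEGRITY;
        return NONE;
    }

    int main(void)
    {
        struct authority guest = { .may_read = true, .may_write = false };

        /* A guest reading is fine; a guest writing violates integrity. */
        printf("read:  %d\n", check(guest, true,  false));   /* 0 = NONE      */
        printf("write: %d\n", check(guest, false, true));    /* 2 = INTEGRITY */
        return 0;
    }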

Although the types of cyber-attacks may differ, all attacks within the cyber domain must subvert one or more of these three security principles to succeed.  Attacks with the objective of war generally have as a goal the subversion of integrity (distort or destroy) and/or the subversion of availability (delay or deny).  Espionage and crime generally have as a goal the subversion of confidentiality.

Attack Vectors

Regardless of the type of cyber-attack or its objective, the attack must be directed against a component of cyberspace.  Broadly speaking, there are four vectors for directing an attack against a cyberspace component.  First, the attack may be directed against a vulnerability in a component of the system, targeting an inherent flaw in the design of that component and exploiting it.  Second, the adversary may direct an attack against a human in cyberspace by convincing them to commit an act that subverts one or more of the security principles, such as installing computer code of the adversary’s choosing; typically, this involves some level of “social engineering” to subvert a trust relationship.  Third, the attack may be directed against a component of the system that is configured in a way that allows an adversary to compromise one or more of the security principles; there is no flaw in the component per se, rather the component is not operated in a secure way.  Lastly, the adversary may specifically create a component that allows them to compromise one or more of the security principles and, through some subterfuge, have the user install that component in their infrastructure.  These vectors are addressed in more detail below.

Vulnerabilities

All but the most trivial of cyberspace components have a finite probability of having an underlying design, implementation, manufacturing, or judgmental flaw.  A flaw, in this sense, is a construct that can respond to specific input in a manner that causes the violation of one or more of the security principles.  A flaw becomes a vulnerability only when it is possible for an adversary to supply that specific input to the component.

The degree to which a vulnerability is exploitable is a function of how difficult it is for an adversary to supply the input to the component and under what conditions.  For example, if one must be a trusted user of the component in order to supply the input, the vulnerability would be deemed less exploitable than one for which an arbitrary and anonymous user could supply the input.

Social Engineering

Social engineering refers to the attack technique whereby an attacker fools a trusted user of a component into supplying input that causes a violation of one or more of the security principles.  Social engineering is a failure of trust.  The targeted user is, typically through guile, led to believe that their actions are on behalf of someone they trust.

Configuration

Cyberspace components must be “installed” in the cyber domain.  That is, they must be connected or positioned to use the communications paths and assigned whatever information is needed to participate in the domain.  If this process is not done correctly, it may not be possible for the component to enforce the security principles, much like leaving the door to a secure space unlocked.  This becomes an attack vector if an adversary can use that component in a manner of their choosing.

Supply Chain

The supply chain refers to the complete provenance of the component, from design and construction to shipment and receipt.  An adversary could intentionally introduce a flaw into a cyberspace component anywhere within its supply chain, so that the adversary could exploit the associated vulnerability at a time of their choosing.  The flaw could be in either hardware or software.

Attack Surface

Cyberspace is a network of components.  These components include all of the sensors, processing systems, communications paths, input and output devices, and human operators that enable cyberspace.  Each component can be attacked.  The attack surface of a given component encapsulates all potential avenues of attack against that component.[5]  It is the sum of all that an untrusted person or process may access.  It includes, but is not limited to, user input fields, protocols, interfaces, and services.  All components in cyberspace have a nonzero probability of having vulnerabilities.  The greater the attack surface, the higher the probability that a system will be successfully attacked.  It is important to note that attack surface and risk are not the same.  A system with a large attack surface and many vulnerabilities may not have a vulnerability that leads to catastrophic failure, while a system with a small attack surface may have a single vulnerability that does.  Vulnerabilities are the vector of attack.  Risk is a function of the effects of attacks.
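
The notion of attack surface as the sum of all that an untrusted person or process may access can be sketched as a simple enumeration.  The hypothetical component and its interfaces below are assumptions for illustration; the point is only that the attack surface is the subset of interfaces reachable by untrusted users or processes.

    /* Sketch: the attack surface of a component is the set of its interfaces
     * that an untrusted person or process can reach.  Component and interface
     * names below are hypothetical. */
    #include <stdbool.h>
    #include <stdio.h>

    struct interface {
        const char *name;
        bool reachable_by_untrusted;   /* part of the attack surface? */
    };

    int main(void)
    {
        struct interface component[] = {
            {"login form",            true},
            {"public HTTP API",       true},
            {"admin console (LAN)",   false},
            {"firmware update port",  true},
            {"internal debug bus",    false},
        };
        int n = sizeof(component) / sizeof(component[0]);

        int surface = 0;
        for (int i = 0; i < n; i++)
            if (component[i].reachable_by_untrusted) {
                surface++;
                printf("attack surface includes: %s\n", component[i].name);
            }
        printf("attack surface size: %d of %d interfaces\n", surface, n);
        return 0;
    }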

The concept of an attack surface applies to all technical components of the cyber domain: computers and telecommunications systems.  It also applies to the humans that operate and use the components of the cyber domain.  An untrusted person or process may directly access the human operators and users.  A simple email to an operator is an example of such access.

Technology and Attack Surfaces

The attack surface of a cyberspace component is the collection of parts of that component that can be accessed by an untrusted user or process.  It is the place where flaws in design or implementation may become vulnerabilities to be exploited.  If an untrusted user or process can never reach the component, then flaws, from a security perspective, are irrelevant.  It is the inherent property of a “von Neumann architecture” that allows a flaw to become a vulnerability – and all computational components of cyberspace are constructed on von Neumann architectures.

The term “von Neumann architecture” comes from John von Neumann, who authored two papers in 1945 and coauthored a third in 1946 that were the first to articulate the requirements for a general-purpose electronic computer.[6]  Virtually all computers to date make use of his original ideas.  Von Neumann’s principal idea was that, for a computer to be general purpose, both instructions and data should be stored in the same memory.  Instructions must be as changeable as the data upon which they act.  There is no distinction between them in memory.  Whatever the processor “believes” to be an instruction or to be data is treated as such.  Computers frequently treat instructions as data for the purposes of loading or relocating them.  If an attacker is able to load data into memory and force its execution, the attacker can cause the computer to behave in a manner of their choosing.  This is at the heart of a “buffer overrun.”

While buffer overruns are not the only way to exploit a system within the cyber domain, they are a commonly exploited vulnerability and illustrate well the point about von Neumann architectures and attack surfaces.  Frank Swiderski and Window Snyder give a good description of a buffer overrun in their book:

A buffer overrun occurs when an application copies more data to a memory region than it has previously allocated.  This causes data in contiguous memory to be overwritten.  An attacker can use a buffer overrun if he controls the data being copied to the memory.  This allows him to overwrite contiguous data with information under his control.  In some cases, such as on the stack, the contiguous memory can have program control flow information.  In other cases, it might simply have data that affects program state.  In any event, the attacker can change this data, causing the application to behave in an unintended manner.  This usually results in the attacker forcing the target application to run arbitrary machine code of his choosing.[7]

A buffer overrun is a failure of integrity.  The attacker is able to write instructions to memory and cause the computer to execute them.  Most attacks involve some variant on this theme.
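
A minimal, deliberately vulnerable C fragment makes the mechanism concrete.  It is a sketch of the classic stack buffer overrun described above, not an example of any particular real-world exploit: a fixed-size buffer is filled from untrusted input with no length check, so adjacent memory, which may include control-flow information, can be overwritten.

    /* Deliberately vulnerable sketch of a classic stack buffer overrun.
     * strcpy() copies until it finds a terminating zero, so input longer than
     * 16 bytes overwrites whatever sits next to 'name' on the stack,
     * potentially including saved control-flow information. */
    #include <string.h>
    #include <stdio.h>

    static void greet(const char *untrusted_input)
    {
        char name[16];                     /* fixed-size buffer on the stack */
        strcpy(name, untrusted_input);     /* BUG: no bounds check           */
        printf("hello, %s\n", name);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            greet(argv[1]);   /* attacker controls the data being copied */
        return 0;
    }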

Humans and Attack Surfaces

Though not often thought of as such, human operators in the cyber domain are an attack surface.  Like a component with a buffer overrun, they can be fooled into executing code selected by an attacker.  Their lack of awareness or naiveté is a vulnerability.  Training may reduce these vulnerabilities, but there will always be attacks that no one has anticipated.

Carl Ellison devised the concept of a ceremony to analyze such attacks.[8]  A ceremony is a protocol analysis that includes all parts of the protocol, including the humans involved.  Humans have state and perform state transitions just as any other component in the protocol does.  When one uses ceremonies, one finds that many common, otherwise secure, protocols are not secure because they require humans to make decisions for which they lack adequate information.  For example, digital certificates, while cryptographically secure, create insecure ceremonies, as most humans do not have the information necessary (state) to know if a given certificate is appropriate for its use (state transition).[9]  Humans are frequently overwhelmed by the complexity of the space they are required to manage.  They lack the specific state knowledge to choose the correct state transition.
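
The point can be sketched by modeling the human as a protocol node, in the spirit of Ellison's ceremonies.  The states, the decision rule, and the certificate scenario below are illustrative assumptions only: a user who holds the relevant state can make the correct transition, while a user who lacks it can only guess.

    /* Sketch of a ceremony step: the human is modeled as a node whose decision
     * (state transition) depends on state they may not possess, e.g. whether a
     * certificate is appropriate for this site.  All names are illustrative. */
    #include <stdio.h>

    enum human_state { KNOWS_EXPECTED_ISSUER, LACKS_EXPECTED_ISSUER };
    enum decision    { ACCEPT, REJECT, GUESS };

    static enum decision certificate_prompt(enum human_state s, int issuer_matches)
    {
        if (s == KNOWS_EXPECTED_ISSUER)
            return issuer_matches ? ACCEPT : REJECT;   /* informed transition */
        /* Without the needed state, the "transition" is effectively a guess,
         * which is where the ceremony becomes insecure. */
        return GUESS;
    }

    int main(void)
    {
        printf("informed user:   %d\n", certificate_prompt(KNOWS_EXPECTED_ISSUER, 0)); /* REJECT */
        printf("uninformed user: %d\n", certificate_prompt(LACKS_EXPECTED_ISSUER, 0)); /* GUESS  */
        return 0;
    }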

Human Failure or Logic Failure

For an attack to be successful in the cyber domain, the defense must fail in some respect.  The cyber domain is unlike the other domains where an adequate defense may be overwhelmed by mass or maneuver (in the traditional sense).  Even attacking the availability of some aspect of the defense with a high volume of information (such as a “Distributed Denial of Service” attack) is not an attack en masse in the traditional sense.  It need not be an attack by an opponent using resources superior in either quantity or quality.  It is simply a failure of the defense to adequately deal with the increase in information.  A correctly informed defense should be immune to variations in information flows.  Management of information flow is a standard property of cyberspace.
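
As a hedged illustration of managing information flow, the sketch below implements a simple token-bucket limiter, a standard technique rather than one prescribed by the sources cited here: a component that knows its own capacity sheds excess requests under a flood rather than collapsing.  The bucket size and rates are arbitrary assumptions.

    /* Token-bucket sketch: a component that knows its own capacity sheds excess
     * requests instead of collapsing when the information flow spikes. */
    #include <stdbool.h>
    #include <stdio.h>

    struct bucket {
        double tokens;       /* current capacity available */
        double max_tokens;   /* burst capacity             */
        double refill_rate;  /* tokens added per second    */
    };

    /* Called once per request; elapsed is seconds since the previous call. */
    static bool allow(struct bucket *b, double elapsed)
    {
        b->tokens += b->refill_rate * elapsed;
        if (b->tokens > b->max_tokens)
            b->tokens = b->max_tokens;
        if (b->tokens >= 1.0) {
            b->tokens -= 1.0;
            return true;     /* serve the request */
        }
        return false;        /* shed the request  */
    }

    int main(void)
    {
        struct bucket b = { .tokens = 5, .max_tokens = 5, .refill_rate = 2.0 };
        int served = 0, shed = 0;

        /* Simulate a burst of 100 requests arriving 10 ms apart. */
        for (int i = 0; i < 100; i++)
            allow(&b, 0.01) ? served++ : shed++;

        printf("served %d, shed %d\n", served, shed);
        return 0;
    }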

Defensive failure can occur in either the technology that makes up the cyber domain or the people that use it.  People fail when they make erroneous decisions that put the defense in jeopardy.  Technology fails when it performs in a way that is either unknown or unanticipated, putting the defense in jeopardy.

Successful Attacks within Cyberspace

From the point of view of the defender, there are only two reasons that attacks within the cyber domain succeed.  The weapon, or exploit, is either something novel, which is unknown to the defender, or it is something known, but against which the defender has failed to remediate.  To be truly novel, no part of the defense should recognize the weapon for what it is.  An exploit may be novel relative to the component under attack, but be recognized by some other component in the defensive system (for example, a security component), and thus not actually be novel.  Truly novel weapons, by definition, go undetected by the defense.  All other successful attacks are a failure to remediate – to act on past information.

In this respect, war within the cyber domain is different from war within other domains.  Successful attacks in the cyber domain are always achieved by surprise, strategic or tactical, and not by mass or maneuver.  Surprise, in a military context, is to “strike the enemy at a time or place or in a manner for which he is unprepared.”[10]  The strategic mobility within the cyber domain gives the offense the ability to choose when and where to strike.  Novelty allows the offense to choose a manner of attack for which the defense is unprepared.  In the other domains of war, the weapon used is rarely a surprise to the defense.  In the cyber domain, if the weapon is not a surprise, the attack is not likely to be successful.  Military surprise is the condition that results from the interaction of two components: time and defensive preparation.[11]  As noted earlier, the cyber dimension is temporal in its essence.  Because the offense can attack nearly instantaneously, to whatever extent the defense is unprepared, the defense has failed.

Novelty

A cyber weapon is novel if it uses tactics, techniques, or procedures unknown to the defender.  Frequently, this involves the use of one or more previously unknown vulnerabilities in the cyber systems of the defender.  The cost to search for such unknown vulnerabilities is the same whether done by the offense or the defense.  As systems mature, both offense and defense seek and expose such vulnerabilities – the defense exposes them through remediation, the offense through exploitation.  If the defense remediates not just the specific attack, but all attacks of the same methodology or class, then, over time, vulnerabilities become more costly to find and novel weapons more difficult to create.  Novelty, finding vulnerabilities, costs both the offense and the defense the same.

Remediation

Remediation renders a known weapon impotent.  If the defense is active and clever, remediation can render all weapons of a class or methodology impotent.  That is, the defense can render a certain class of future weapons impotent as well.  The work factor required to remediate a known weapon or class of weapons is less than that required to find the vulnerability that enables the weapon in the first place, as the defense does not need to evaluate all possible inputs to all possible states of all processes.  The nature of the weapon itself can guide the defense, reducing the possible search space and making the problem tractable.

Remediation need not involve the specific component attacked.  All that is required is that the system, as a whole, recognize the weapon and be immune to its effects.  The defense has the option of pursuing remediation by whatever path is least expensive – directly correcting the vulnerability or specifically detecting and blunting the attack.  Remediation is a defensive option and is far cheaper than developing novelty.
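
Returning to the earlier buffer overrun sketch, remediating the class rather than the single weapon might look like the fragment below: bounding the copy closes every attack that relies on overrunning this buffer, not just one known exploit.  The function name and buffer size are illustrative assumptions.

    /* Class-level remediation sketch: bounding the copy removes not just one
     * known exploit but every attack that relies on overrunning this buffer. */
    #include <stdio.h>
    #include <string.h>

    static void greet_safely(const char *untrusted_input)
    {
        char name[16];
        /* Copy at most sizeof(name) - 1 bytes and always terminate. */
        strncpy(name, untrusted_input, sizeof(name) - 1);
        name[sizeof(name) - 1] = '\0';
        printf("hello, %s\n", name);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            greet_safely(argv[1]);   /* oversized input is now truncated, not executed */
        return 0;
    }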


[1] Manabrata Guha, Reimagining War in the 21st Century: from Clausewitz to Network-Centric Warfare (New York: Routledge, 2011), 102.

[2] Paul Baran, On Distributed Communications: I. Introduction to Distributed Communications Networks (Santa Monica, CA: Rand Corporation, August, 1964), http://www.rand.org/pubs/research_memoranda/RM3420.html (accessed June 3, 2012).

[3] Albert-Laszlo Barabasi and Eric Bonabeau, “Scale-Free Networks,” Scientific American 288, no. 5 (May, 2003): 50-59.

[4] William A. Owens, Lifting the Fog of War (Baltimore: The Johns Hopkins University Press, 2001), 98-102.

[5] Michael Howard and David LeBlanc, Writing Secure Code, 2nd ed. (Redmond, WA: Microsoft Press, 2003), 611-13.

[6] H. Norton Riley, “The von Neumann Architecture of Computer Systems” (Computer Science Department, California State Polytechnic University, Pomona, CA, September, 1987), http://www.csupomona.edu/…hnriley/www/VonN.html (accessed February 29, 2012).

[7] Frank Swiderski and Window Snyder, Threat Modeling (Redmond, WA: Microsoft Press, 2004), 5.

[8] Carl Ellison, “Ceremony Design and Analysis” (paper presented at the Rump Session of the Crypto 2005 Conference, Santa Barbara, CA, August 14-18, 2005), http://eprint.iacr.org/2007/399.pdf (accessed March 2, 2012).

[9] Peter Ryan et al., Modelling and Analysis of Security Protocols (London: Addison-Wesley Professional, 2000), 24-37.

[10] Robert Leonhard, The Principles of War for the Information Age (New York: Presidio Press, 2000), 182.

[11] Ibid., 183.
