Innovation and Learning in Terrorist Organizations: Towards Adaptive Capacity and Resiliency

Nancy K. Hayden
Sandia National Laboratories, [email protected]
Center for International and Security Studies at Maryland, [email protected]

August 16, 2013

"We have become more adept at disrupting terrorist networks; nevertheless, our terrorist adversaries continue to learn and adapt, posing an enduring threat to the security of America and its allies and partners."
– 2010 United States Quadrennial Defense Review

Abstract

The concept of terrorist organizations as complex adaptive systems (CAS) has generated an abundance of models focused on understanding the inherent structural strengths and weaknesses of these organizations, with the ultimate goal of disruption and defeat. However, in-depth theoretical analyses that apply first principles of CAS to terrorist organizations as dynamical systems remain few. Specifically, while most experts acknowledge the key role that innovation and learning play in providing terrorist organizations with the capacity to adapt, the question of what influences innovation and learning – and the difference between the two – in these covert organizations has received little systematic treatment. This paper reviews the organizing principles, behavioral characteristics, and mechanisms of learning and innovation in complex adaptive systems; discusses how other authors have applied these principles to understanding terrorist organizations; and introduces the constraints imposed by the need for secrecy in these covert organizations. In doing so, I provide a theoretically grounded framework that combines a system dynamics perspective on innovation and learning within covert organizations with first principles of complex adaptive systems to predict under what conditions innovation is likely to occur within terrorist organizations. Historical evidence from terrorist organizations and their activities over more than thirty years supports the qualitative predictions of the framework.

Introduction

This paper is motivated by the observation that, while the consideration of terrorist organizations as complex adaptive systems (CAS) has become routine within the security community, the evidence suggests that the majority of terrorist organizations and their operations show surprisingly little of the type of innovation that is often characteristic of CAS. Key principles of system dynamics are reviewed in the first section of this paper to generate criteria for applying the paradigm to terrorist organizations. In the second section, a generalized conceptual framework for innovation and learning within CAS is presented. The third section brings these ideas together in a conceptual systems model for examining learning and innovation within terrorist organizations. The fourth section discusses empirical evidence in support of this model, next steps for further research, and broader implications for other types of covert organizations.

1. The Basics

A system is an internally organized whole, where elements are so intimately connected that they operate as one in relation to external conditions and other systems (Meadows & Wright, 2008). A set of objects or a collection of people is not a system unless its members are in regular interactions that result in behavior of the system as a whole.
An important criterion for applying the CAS paradigm to terrorist organizations, therefore, is that entities within the terrorist organization must be in regular interactions that lead to system behavior as a whole. While this may seem obvious, many instances of terrorist activity do not meet this fundamental criterion. Some terrorist attacks are the actions of "lone wolves," such as the 2011 car bombing by Anders Breivik in Norway. Many others have no known organizational association. According to the Global Terrorism Database at the University of Maryland, approximately 7,600 of 98,000 terrorist events since 1970 fall in the latter category (Global Terrorism Database, 2012).

Additional criteria for applying the CAS paradigm to terrorist organizations derive from system properties. Systems can be linear or non-linear, open or closed, and simple or complex, depending on the nature of the interactions among actors within the organization and between those actors and actors external to the organization (Bertalanffy, 1980; Laszlo, 2001; Legasto, Forrester, & Lyneis, 1980). Within terrorist organizations, these properties may change significantly over time, which in turn significantly impacts rates of learning and innovation. Behavioral models of terrorist organizations need to explicitly account for these state properties and how they change in response to the environment.

A linear system is one in which one or more perturbations to parts of the system evoke a response of the system as a whole that is linearly proportional to the stimuli. Cause and effect are easy to observe, as big changes result in big and proportionate responses. In non-linear systems, proportionality and summation no longer hold: small changes in initial conditions or interventions can result in massive changes to the system, and vice versa. Nonlinearity makes the relationship between causes and effects difficult to observe, which can be a problem when trying to validate models of innovation in complex adaptive systems, especially those that may be relatively closed, such as some covert terrorist organizations.

A closed system is one that is fully self-contained and does not interact with its environment. Many systems studied in physics are closed systems. In such systems entropy tends toward a maximum: absent any outside forces, they move over time toward increasing disorder. In contrast, open systems support ongoing exchanges of materials and information with the environment. This allows negative entropy to accumulate; that is, order can develop without external intervention, as CAS self-organize to find optimal positions in fitness landscapes (Kauffman, 1993). All else being equal, then, both open and closed systems can have mechanisms that act in opposite directions to impact learning and innovation, depending on the structures that develop and whether those structures impede or amplify the inflow and transmission of information and resources.

Open systems can interact with unorganized elements of the environment or with other systems. When interacting with other systems, one has a system of systems (SoS). A SoS exhibits emergent properties different from those of its constituent systems.

Complexity

There are many different reference frames for conceptually differentiating simple systems from complex ones, drawing on analogies from thermodynamics, information theory, structural mechanics, and graph theory.
Crutchfield has demonstrated that deterministic conceptions – such as temperature, information density, and entropy – are in reality different measurements of the underlying order, or randomness, in the system (Crutchfield, 2003). This is illustrated graphically in Figure 1.

Figure 1 System complexity is a function of both the structural organization of interactions between entities in a system and the randomness of the individual entities.

The concept of complexity can be likened to the statistical concept of entropy. At one end of the spectrum in Figure 1, simple periodic processes with high order and low randomness have negative entropy and low structural complexity. At the other end, there is no order to cause and effect, and all outcomes are equally likely in the short term. This is the regime of chaos. Complex systems arise between these extremes and are an amalgam of predictable and stochastic mechanisms.

Bar-Yam describes the transition from simple to complex, and from complex to chaotic, with a quadratic map that treats any system as a collection of interacting agents (Bar-Yam, 1997):

f(s) = a s (1 - s),   (Eq. 1)

where s is an infinite sequence of binary variables and 0 < a < 4. In simple systems, 0 < a < 1, and all agents interact in the same, predictable manner. In complex systems, 1 < a < 3, and agent interactions change dynamically in fluctuating and combinatorial ways that follow simple rules (e.g., maximize utility, maintain likeness to neighbors). Between 3 and 4, however, there is a bifurcation point beyond which all order breaks down and a chaotic system ensues. Rogers et al. argue that the likelihood of innovation increases as one approaches this bifurcation point in a system, but decreases beyond it (Rogers, Medina, Rivera, & Wiley).

Both conceptions of complexity depend on the structure and dynamics of the interactions (information and/or material flows) between the fundamental units, or agents, in the system. Using classical systems theory, one can describe the effects of these interactions on system properties as a set of simultaneous differential equations. Let Q_i (i = 1, ..., n) be the measure of some property of the n elements in a finite system. Then the change in Q_i over time is given by solving the simultaneous set of equations:

dQ_1/dt = f_1(a_11 Q_1, a_12 Q_2, a_13 Q_3, ..., a_1n Q_n)
dQ_2/dt = f_2(a_21 Q_1, a_22 Q_2, a_23 Q_3, ..., a_2n Q_n)        (Eq. 2)
...
dQ_n/dt = f_n(a_n1 Q_1, a_n2 Q_2, a_n3 Q_3, ..., a_nn Q_n)

System complexity is introduced by allowing self-organizing interaction between the elements. This results in a variety of models of cooperation or competition. In one such model, the predator-prey model, the system is capable of reaching a quasi-equilibrium state that is regulated by the interaction between the two elements in mutual dependency. In other models of competition, however, no such regulation occurs and the system may become unstable.

Evolution and Adaptation, Innovation and Learning in Complex Systems

Adaptation, evolution, learning, and innovation are key features of complex adaptive systems (Bar-Yam, 1997; Bonabeau, Dorigo, & Theraulaz, 1999; Holland, 1995; Jantsch, 1980) that can be conceptualized as the response to feedback from, and interactions with, the environment (Crutchfield, 2003; Sterman, 2000).
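To make the regimes of Eq. 1 concrete, the following is a minimal sketch (in Python; the starting value, burn-in length, and the particular values of a are illustrative choices, not taken from Bar-Yam's text) that iterates the quadratic map s_{t+1} = a s_t (1 - s_t) and reports whether the long-run behavior is a fixed point, a periodic cycle, or aperiodic.

```python
# Minimal sketch: long-run behavior of the quadratic (logistic) map in Eq. 1,
# s_{t+1} = a * s_t * (1 - s_t), for values of the parameter a in (0, 4).
# Parameter choices (burn-in length, sample size, starting value) are illustrative.
import numpy as np

def long_run_states(a, s0=0.37, burn_in=500, samples=32):
    """Iterate the map past a burn-in period and return the states it then visits."""
    s = s0
    for _ in range(burn_in):
        s = a * s * (1.0 - s)
    visited = set()
    for _ in range(samples):
        s = a * s * (1.0 - s)
        visited.add(round(s, 6))   # round so periodic orbits collapse to a few states
    return sorted(visited)

for a in (0.8, 2.5, 3.2, 3.5, 3.9):
    states = long_run_states(a)
    regime = ("fixed point" if len(states) == 1
              else f"period-{len(states)} cycle" if len(states) <= 8
              else "aperiodic / chaotic")
    print(f"a = {a:>4}: {regime:>22}, sample states {states[:4]}")
```

In this sketch the map settles to a single state for small a, cycles through a growing number of states as a approaches the bifurcation cascade, and wanders aperiodically in the chaotic regime, mirroring the simple, complex, and chaotic regions described above.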
These behaviors are self-organizing mechanisms by which a system responds to disequilibrium states resulting from initial conditions, from internal drivers (such as competitive goal-seeking) that change resource utilization distributions and impact production/dissolution rates, and from external forces or shocks.

Evolution is the process of natural selection of "accidents," such as mutants, based on their ability to improve the overall fitness of the system relative to its goal (Jantsch, 1980). Evolution occurs over long periods of time through successive generations, as those with the mutation are more successful in surviving and repopulating than those without. Co-evolution may occur, in which the existence of one element (such as a species) is tightly bound up with the existence of another. In the context of Eq. 2, evolution is modeled as a gradual change in f_i (i = 1, ..., n) over successive generations, due to higher regeneration rates of the mutant Q_i.

Adaptation through learning and innovation occurs on a much different time-scale than evolution. Both involve information exchange with the environment and with elements within the system. Learning is the process of modifying existing knowledge, behaviors, skills, values, or preferences, and involves the synthesis of different types of information. Imitation generates learning by mimicking the activities of others based on the observed causes and effects of their actions, whereas repetition generates learning through feedback on one's own actions. Learning can occur at the individual element level or at the system level. In the context of Eq. 2, learning by element Q_i results in a change in its potential contribution to all other elements of the system and to system performance as a whole. Whether or not this occurs depends on the interaction functions f_i (i = 1, ..., n) and the reaction coefficients a_ij of element Q_i with the rest of the system. System-level learning occurs when a previously unused element Q_i is adopted for use within the system for the same purpose observed in other systems. In the context of Eq. 2, this is likened to changing a reaction coefficient a_ij from a zero to a nonzero value for Q_i, keeping the functional form of the use of Q_i the same as in the observed system.

Bonabeau et al. (1999) explain learning as emergent collective intelligence within groups of simple agents, among which decision rules based on autonomy and distributed functioning replace control, preprogramming, and centralization. Through computational experiments, they showed that such systems perform sub-optimally on regular structures but perform well on complex structures.

Innovation involves the incorporation of a previously unused element into the system, or the recombination of existing elements in new ways (Holland, 1995). Specialized elements are recombined and utilized differently, as reflected in changes to both the functional forms of Eq. 2 and the reaction coefficients. As will be discussed in a later section, CAS are postulated to provide optimal conditions for innovation to emerge when channels for information exchange exist with diverse external communities, and when the opportunities to exploit new information are not constrained by the internal structure of the system. Even so, the process by which an innovation emerges is not yet well understood.
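As a purely illustrative sketch of the reaction-coefficient interpretation above (the two-element system, the linear choice of f_i, and all coefficient values are assumptions made here for illustration, not part of the framework), the following simulates Eq. 2 for n = 2 and models system-level learning as the coefficient a_12 switching from zero to a nonzero value partway through the run, after which element Q_1 begins to draw on Q_2.

```python
# Illustrative sketch of Eq. 2 with n = 2 elements. "System-level learning" is
# modeled as switching the reaction coefficient a_12 from 0 to a nonzero value
# at time t_learn. The linear form of f_i and all coefficient values are
# assumptions chosen only to make the behavior easy to see.
import numpy as np

def simulate(a12_after=0.3, t_learn=50.0, t_end=150.0, dt=0.1):
    a11, a21, a22 = -0.1, 0.05, -0.2     # self-damping and weak feedback terms
    Q1, Q2 = 1.0, 5.0                    # initial property levels
    history = []
    for step in range(int(t_end / dt)):
        t = step * dt
        a12 = 0.0 if t < t_learn else a12_after      # the "learning" event
        dQ1 = a11 * Q1 + a12 * Q2                    # f_1 taken as a simple sum
        dQ2 = a21 * Q1 + a22 * Q2 + 1.0              # constant inflow sustains Q_2
        Q1, Q2 = Q1 + dt * dQ1, Q2 + dt * dQ2        # Euler integration step
        history.append((t, Q1, Q2))
    return history

traj = simulate()
q1_before = [q1 for t, q1, _ in traj if t < 50.0][-1]
q1_after = traj[-1][1]
print(f"Q_1 just before learning: {q1_before:.2f};  Q_1 at end of run: {q1_after:.2f}")
```

Before the switch, Q_1 decays on its own; after it, Q_1 grows toward a new, higher level sustained by Q_2 – the kind of change in an element's potential contribution to the system that the text associates with learning.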
At a societal level, terrorism may be viewed as an emergent phenomenon presenting a solution to an otherwise intractable problem for certain subsystems that perceive themselves as disadvantaged and otherwise disempowered within a greater system (Hayden, 2006). As self-organized subsystems, organizations employing terrorist operations seek to create disequilibrium and change the basic functions and distributions of resources within the system to advantage themselves and others. The fact that terrorist organizations have tended to be conservative in their operations, and have not exhibited a propensity to use weapons of mass destruction in spite of rhetoric threatening to do so, presents a puzzle to terrorism research scholars and the national security community alike: when do terrorist organizations use learning and innovation to achieve their goals, and why do we not see more of it? This paper uses system dynamics to explore this question.

Networks, Evolution and Adaptation, Innovation and Learning

A key hypothesis is that the network structures that evolve from system dynamics influence and constrain the processes of evolution, adaptation, innovation, and learning in terrorist organizations through information exchange mechanisms.

Random networks, in which there is an equal probability p of a connection between any two nodes, have short average and overall path lengths, providing robust and efficient means of information exchange. However, random graphs evolve slowly, and it is difficult for outliers (where many innovations occur) to have much of an impact on the rest of the network. Even so, there is a critical threshold value of p, related to the number of nodes n in the network, beyond which a cascade effect generates a single large, or even "giant," component (Figure 2). In this case, innovations developed by outliers rapidly spread through the network. Research into collaboration networks validates the existence of random networks with giant components among diverse communities of social actors, such as scientists, movie actors, and board directors (Newman, Watts, & Strogatz, 2002). Empirical data suggest the existence of giant components in several "dark" networks, e.g., Islamic jihadists, drug rings, and criminal organizations (Xu & Chen, 2008).

Figure 2 Erdös-Rényi Random Network
Figure 3 Scale-Free Network

Scale-free networks (Figure 3) are those in which the distribution of connections within the network follows the power law:

P(k) = c k^(-γ),   (Eq. 3)

where P(k) is the fraction of nodes in the network having k connections to other nodes, c is a normalization constant, and γ is a parameter with values typically between 2 and 3. Preferential attachment and evolutionary processes are mechanisms that can generate scale-free networks. Computer simulations have shown that scale-free networks are able to evolve to perform new functions more rapidly than random graphs with equal probability of connections. Scale-free networks are resilient to accidental, random failures; however, they are more vulnerable to directed attacks than random networks. Theoretically, learning and innovation in scale-free networks should exhibit behavior patterns indicative of diffusion and natural evolution mechanisms. While scale-free networks are among the most ubiquitous in natural, social, and technological systems, they are not prevalent among terrorist organizations.
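The two network classes just described can be generated and inspected directly. The following is an illustrative sketch using the Python networkx library (the sizes, probabilities, and random seed are arbitrary choices, not parameters from the cited studies); it shows the sudden appearance of a giant component once p crosses the roughly 1/n threshold in a random graph, and the hub-dominated degree distribution of a preferential-attachment (scale-free) network.

```python
# Illustrative sketch (arbitrary sizes/parameters): the giant-component threshold
# in an Erdos-Renyi random graph and the heavy-tailed degree distribution of a
# scale-free (Barabasi-Albert preferential-attachment) network.
import networkx as nx

n = 1000
for p in (0.5 / n, 1.5 / n):            # below and above the threshold p ~ 1/n
    G = nx.erdos_renyi_graph(n, p, seed=1)
    giant = max(nx.connected_components(G), key=len)
    print(f"p = {p:.4f}: largest component holds {len(giant)/n:.0%} of nodes")

BA = nx.barabasi_albert_graph(n, m=2, seed=1)   # preferential attachment
degrees = sorted((d for _, d in BA.degree()), reverse=True)
print("Scale-free network: top-5 hub degrees", degrees[:5],
      "vs. median degree", degrees[len(degrees) // 2])
```

Below the threshold the largest component holds only a few percent of the nodes; above it, a single component spans most of the network, which is the structural condition under which innovations developed by outliers can spread rapidly.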
Small world networks (Figure 4) are characterized by higher clustering coefficients than random graphs, while maintaining the same average shortest path length for the overall network.

Figure 4 Small World Network
Figure 5 Core-Periphery Network
Figure 6 Ring Network

The clustering coefficient measures the degree to which all nodes within a neighborhood are connected to all other nodes in that neighborhood (where the neighborhood of a node j comprises its immediately connected neighbors). The four "weak links" connecting neighborhoods in Figure 4 are critical to maintaining a short average path length. Like scale-free networks, small world networks are ubiquitous in self-organizing natural systems. As one might intuitively expect, learning and innovation in small world networks occur in spurts, through a type of punctuated equilibrium process that is highly vulnerable to the existence of the weak links (Filk & Muller; Gould & Eldredge, 1977). Less obvious is the mechanism for the formation and reconstitution of these weak links. Many of the Islamic terrorist organizations today exhibit small-world network properties.

As with scale-free and small-world networks, the core-periphery network exhibits a high degree of clustering. However, as shown in Figure 5, the clustering is confined to a densely connected core surrounded by sparsely connected peripheral nodes. Core-periphery networks evolve as elements on the periphery join the core to exploit economies of scale, or as cores expand into outlying neighborhoods for resource exploitation. Political examples are bandwagoning and colonization, respectively. Social examples are found in friendship networks, voting networks, and transportation networks (Rombach, Porter, Fowler, & Mucha, 2012). Information diffusion and virus propagation on many online networks exhibit core-periphery structures (Gomez-Rodriguez, Leskovec, & Krause, 2010). Terrorist organizations that enjoy state sponsorship, such as Hezbollah, are more likely to evolve into core-periphery networks.

Recent studies on the spread of complex contagions suggest that core-periphery structures can have much higher transmission rates than small worlds. A complex contagion is one requiring multiple exposures for the contagion to spread (Damon & Macy, 2007). High-risk contagions – such as the purchase of an expensive piece of equipment, participation in a risky political action, or the adoption of an unproven technology – require multiple "social proofs." In small world networks, the linkages between community structures are long (which increases effective transmission rates) but "thin." The thinness of these linkages slows the spread of risky contagions. In contrast, the multiple short paths between nodes in overlapping community structures build many "wide" bridges in the core-periphery network, creating high effective transmission rates (Reid & Hurley, 2011). This has significant implications for state-sponsored terrorist organizations, which enjoy both the resources and the network structure to support innovation against adversaries.

Ring networks (Figure 6) are simple structures in which each node connects to exactly two other nodes, forming a single continuous pathway for transmission events through each node. Obviously, these networks are highly vulnerable to the removal of any one of the links.
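To illustrate this fragility concretely, the following is a minimal sketch using the Python networkx library (the network size is an arbitrary choice, and the "dual counter-rotating ring" variant anticipates the redundancy strategy described in the next paragraph). It checks whether every node can still reach every other node after a single directed link fails.

```python
# Minimal sketch (arbitrary size n): a single directed ring loses full reachability
# after any one link fails, whereas adding a secondary, counter-rotating ring
# preserves reachability between all node pairs under any single link failure.
import networkx as nx

def ring(n, counter_rotating=False):
    G = nx.DiGraph()
    G.add_edges_from((i, (i + 1) % n) for i in range(n))        # primary ring
    if counter_rotating:
        G.add_edges_from((i, (i - 1) % n) for i in range(n))    # redundant reverse ring
    return G

def single_failure_survival(G):
    """Fraction of single-link failures after which the network stays strongly connected."""
    survived = 0
    for edge in list(G.edges()):
        H = G.copy()
        H.remove_edge(*edge)
        survived += nx.is_strongly_connected(H)
    return survived / G.number_of_edges()

for label, G in (("single ring", ring(10)), ("dual counter-rotating ring", ring(10, True))):
    print(f"{label:>27}: survives {single_failure_survival(G):.0%} of single-link failures")
```

Under this simplified reachability criterion, the single ring fails for every possible link removal, while the dual ring tolerates any single failure, at the cost of doubling the number of links that must be maintained.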
Typically, this vulnerability is managed through redundancies - by sending simultaneous, duplicative transmissions in opposite directions and by utilizing secondary, overlapping, counter-rotating rings. The idea is that not every transmission will get through on every ring, but the probability of complete system failure is low, since every node holds the information to be transmitted and no central node is required to manage the system. For small networks, ring structures have been shown to provide an optimal configuration for protecting secrecy while maintaining operational efficiency, if not robustness, but they do not facilitate learning and innovation (Lindelauf, Borm, & Hamers, 2009). This finding has implications for small covert terrorist cells, where n may be less than ten, and for terrorist organizations in start-up stages, and is consistent with the large number of short-lived terrorist organizations.

For networks of more moderate size, between 20 and 40 nodes, the "windmill" and "reinforced wheel" networks shown in Figure 7 have been shown to be most efficient for achieving the dual objectives of secrecy and efficiency (Lindelauf et al., 2009). The network topologies in Figure 7 are variations on the familiar hub-and-spoke pattern of many distribution systems. These structures were generated in computer experiments to optimize network structures for the dual objectives of secrecy and information efficiency in covert networks, discussed in the following section.

Figure 7 Windmill and Reinforced Wheel Networks

Hub-and-spoke networks evolve naturally to optimize self-organizing distribution systems. Complicated operations that are identically required by every node can be carried out at the hub. Drawbacks include the longer path lengths required for distribution to every node, and the inflexibility of the hub to adapt quickly to changing environmental conditions, which makes it a single point of failure for the system. In spite of these drawbacks, the hub-and-spoke paradigm remains ubiquitous in systems that can realize large improvements in efficiency by centralizing operations.

2. A Framework for Innovation and Learning in Systems

The study of innovation diffusion within CAS is a burgeoning academic area with applications in diverse fields, integrating the pioneering work of Everett Rogers (Rogers et al.) with developing understanding of CAS. While it is beyond the scope of this paper to review this literature, the Cynefin framework proposed by Kurtz and Snowden for innovation management at IBM is particularly relevant (Kurtz & Snowden, 2003). This framework provides an operative context for making sense of a situation and the possibility of innovation based on the system state. Namely, one must first establish whether the system is in a state of order, complexity, or disorder before one can study or affect innovation processes within it (Snowden & Boone, 2007). This is relevant to organizations concerned with discovering when, why, and how terrorist organizations learn and innovate. Each domain in the framework represents a different state of order, resulting in different behavior patterns and requiring different actions to understand and manage the processes occurring within it. The simple and complicated domains both exhibit order, where cause-effect relationships can be known.
This ability to perceive cause and effect is an essential feedback mechanism for learning the "right" answers to problems or discovering optimal solutions through adaptive goal-seeking, presuming that those solutions are known to exist. Complex and chaotic domains present no opportunity for such deterministic resolution of cause and effect, and paths forward emerge holistically, following innovative leaders.

Systems theory teaches that, all else being equal, closed systems will move towards increasing disorder, absent intervention, while open systems will move towards increasing order. Thus, one should expect that within a system of systems (SoS) there may be dynamic movement between these subsystem domains, even while equilibrium is maintained at the system level. Indeed, studies of many organizational, social, biological, and physical systems bear this out. The behavior patterns of these movements between domains are, in turn, domain dependent, as postulated by the Cynefin framework. Kurtz and Snowden postulate that different characteristic network structures will be associated with each of the domains.

Simple domain networks provide the most efficient structures for learning, through the process of sensing the environmental state, categorizing the information received according to previous knowledge, and responding accordingly. However, these responses will obviously not be sensitive to changing environmental conditions. In the complicated domain, the network structures are highly connected with a central hub, as in random graphs with giant components. Here, cause and effect are separated in time, but discoverable along some finite set of possible paths with analysis. Hierarchical networks and learning through incremental improvements are characteristic of information transmission between ordered states. The most likely adaptation mechanism in this case should logically be natural evolution or systematic trial-and-error, and innovation is unlikely. This is the path followed by organizations that are low-risk either by structural design or by culture, such as the Irish Republican Army (IRA).

In both the complex and chaotic domains, cause and effect are not knowable a priori, and the underlying structure constrains the available system responses to some bounded set of possible outcomes. It is necessary to probe the system to discover how the structure is likely to respond. In chaotic systems, observed responses can be the result of many different initiators. In this domain, sense-making requires that one take action, sense the response, and adjust accordingly in a continuous, iterative pattern of actions and reactions.

In contrast, adaptive learning and/or innovation transfer is highly likely to occur within and between systems in the complicated and complex domains. Innovation will most likely emerge within the complex subsystem. The complicated system explores the complex domain for new and novel ideas. Since cause and effect can be determined in the complicated domain, these ideas can be analyzed there for their potential effects before adoption. This is the process followed by organizations that provide an internal entrepreneurial unit with self-organizing freedoms (e.g., complexity) to foster discovery. Discoveries in the complex domain are monitored,