From: jimruttshow8596

Dave Snowden, founder and chief scientific officer of Cognitive Edge, is known for pioneering a science-based approach to complex issues in government and industry, drawing on anthropology, neuroscience, and complex adaptive systems theory [00:00:31]. His work is international, covering strategy, organizational design, and decision-making [00:00:26]. Snowden is best known as the inventor of the Cynefin framework, pronounced “can-EV-in” [00:01:16].

Evolution of Understanding Organizational Complexity

Snowden’s path to complexity science was influenced by his early work at IBM in knowledge management [00:01:40]. He observed that early knowledge management approaches focused too heavily on technology, leading to attempts to codify, and put into databases, information that is not easily codified [00:02:01], [00:02:05], [00:03:08].

Snowden and others argued that decision support was far more complex, requiring consideration of cognitive neuroscience and how people actually make decisions [00:02:15]. This perspective soon evolved into work on narrative and complexity theory [00:02:27], leading to involvement in DARPA programs focused on weak signal detection and decision-making in complex policy environments [00:02:33].

Rejecting Naïve Newtonianism

A significant problem in business and government persists: a “naïve Newtonianism” where people believe that with enough data, the future can be predicted [00:03:22], [00:03:34]. This linear approach to causality assumes that defining the input will define the output, or that future states can be forecasted or backcasted [00:03:46].

Complexity theory, in contrast to deterministic chaos, is the science of uncertainty [00:04:04]. It posits that one can understand the present and map coherent pathways from it, but cannot define a specific outcome [00:04:18]. While deterministic chaos might be a “baby first step” toward discarding naïve Newtonianism, it can lead people to mistakenly believe agent-based models or AI can deal with highly interconnected systems [00:05:10].

Snowden’s influences in complexity theory include Stuart Kauffman, Ilya Prigogine, Brian Arthur, and Ralph Stacey [00:05:49]. A key insight is that the study of complexity in human systems differs from that in natural systems (like termite nests) because humans are not ants [00:06:33], requiring a more transdisciplinary approach incorporating cognitive neuroscience [00:06:18].

The Cynefin Framework

The Cynefin framework divides systems into three fundamental types: ordered, complex, and chaotic, with a “phase shift” between them rather than a gradation [00:08:02]. A central “disorder” domain represents not knowing which system one is in [00:11:51].

Ordered Systems

Ordered systems have a high level of constraint, making everything predictable [00:08:27]. Human beings use constraints to produce predictability, like driving on a specific side of the road [00:08:36].

  1. Obvious (Simple):
    • Cause and effect relationships are self-evident [00:08:49].
    • Domain of “best practice” – a single right way of doing things [00:08:56].
    • Process: Sense, Categorize, Respond [00:09:00].
    • Characterized by rigid constraints [00:09:01].
  2. Complicated:
    • Cause and effect are not self-evident but can be discovered by experts through investigation [00:09:05].
    • There is a right answer, discoverable within a range of possibility [00:09:20].
    • Domain of “good practice” [00:09:29].
    • Process: Sense, Analyze, Respond [00:09:17].
    • Example: A medical practitioner’s flexibility in patient decisions [00:09:34].

“A complicated system is the sum of its parts so you can solve problems by breaking things down and solving them separately” [00:13:30]. Such systems tend to be engineered and their components are generally not antagonistically adaptive [00:14:00], [00:14:38].

Complex Systems

Complex systems have “enabling constraints,” where everything is interconnected, but connections are not fully known [00:10:42]. The concept of “dark constraint” implies seeing the impact of something without knowing its origin [00:10:53].

  • Characteristics:
    • The only way to understand a complex adaptive system is to probe and experiment within it [00:11:04].
    • Experiments must be run in parallel to change the dynamics of the space, allowing solutions to emerge [00:11:11].
    • If evidence supports conflicting hypotheses of action that cannot be resolved in time, the situation is complex [00:11:19].
    • Instead of resolving conflicts, safe-to-fail micro-experiments are constructed around each coherent hypothesis [00:11:34].

“In a complex system the properties of the whole are the result of interaction between the parts and their linkages and the constraints… how things connect is more important than what they are” [00:13:36]. The emergent pattern cannot be decomposed to original parts [00:13:52].
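The parallel, safe-to-fail probing described above can be sketched in code. This is a minimal illustration, not Snowden’s method: the function name, the amplify/dampen thresholds, and the scalar “signal” per probe are all assumptions made for the sketch.

```python
def run_safe_to_fail(hypotheses, trial, probes_per_hypothesis=20):
    """Run small probes around each coherent hypothesis in parallel.

    `trial(h)` returns an observed signal for one probe of hypothesis h;
    probes are interleaved across hypotheses rather than resolving the
    conflict up front. Returns amplify/dampen decisions per hypothesis
    instead of declaring a single winner.
    """
    results = {h: [] for h in hypotheses}
    for _ in range(probes_per_hypothesis):
        for h in hypotheses:  # interleave: all hypotheses probed together
            results[h].append(trial(h))
    decisions = {}
    for h, signals in results.items():
        mean = sum(signals) / len(signals)
        # Amplify probes showing benefit, dampen the rest (threshold illustrative).
        decisions[h] = "amplify" if mean > 0 else "dampen"
    return decisions
```

The point of the structure is that competing coherent hypotheses are never argued to resolution; each gets a cheap probe, and the system’s response decides what to amplify.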

Chaotic Systems

In human systems, chaos is the state in which constraints are absent [00:33:05]. Unlike in physics, where chaos is a low energy gradient, in human systems it is a high energy gradient, because constraints re-form quickly and naturally [00:33:09].

  • Characteristics:
    • Chaos is always temporary [00:33:01].
    • Organizations should avoid falling into chaos accidentally; instead, they should enter it deliberately when needed [00:33:37].
    • Process: Act, Sense, Respond [00:10:20].
    • If ordered systems are “over-constrained,” they can catastrophically fragment into chaos [00:10:04].
    • Example: 9/11 was a brief chaotic state [00:31:47].

To deal with a truly chaotic system, one must create constraints very quickly [00:34:05]. This requires building distributed decision support systems and networks for ordinary purposes that can be activated for extraordinary needs before a crisis hits [00:34:08].

Disorder

The central domain of disorder represents not knowing which of the other systems one is in [00:11:51]. One might enter it accidentally or deliberately [00:11:58]. This state can lead to inauthenticity, where natural tendencies (e.g., towards bureaucracy or emergence) are applied inappropriately [00:12:02].

Key Principles from Cynefin

The essence of Cynefin is that context is key [00:12:19]. It was developed to combat management fads that claimed universal applicability, when in fact they only work in specific contexts [00:12:23]. The framework helps decide what context an organization is in before choosing a method [00:12:38].
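The “context before method” logic can be made concrete with a small lookup. The domain names and decision sequences follow the framework as summarized above (the complex domain’s probe-first sequence reflects its “probe and experiment” guidance); the function and data layout are purely illustrative.

```python
from enum import Enum

class Domain(Enum):
    """Cynefin domains; the ordered type splits into obvious and complicated."""
    OBVIOUS = "obvious"
    COMPLICATED = "complicated"
    COMPLEX = "complex"
    CHAOTIC = "chaotic"
    DISORDER = "disorder"

# Decision process per domain, as described in the framework.
PROCESS = {
    Domain.OBVIOUS:     ["sense", "categorize", "respond"],  # best practice
    Domain.COMPLICATED: ["sense", "analyze", "respond"],     # good practice
    Domain.COMPLEX:     ["probe", "sense", "respond"],       # safe-to-fail experiments
    Domain.CHAOTIC:     ["act", "sense", "respond"],         # impose constraints fast
}

def recommended_process(domain: Domain) -> list[str]:
    """Context is key: identify the domain first, then choose the method."""
    if domain is Domain.DISORDER:
        # Disorder means not knowing which system you are in.
        raise ValueError("Resolve which domain applies before choosing a method")
    return PROCESS[domain]
```

A universal management method corresponds to ignoring the lookup entirely, which is exactly the fad the framework was built to challenge.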

Systems and Causality

  • Complicated vs. Complex: A complicated system is the sum of its parts; problems can be solved by breaking things down [00:13:30]. In a complex system, properties of the whole result from interaction between parts and their linkages, meaning “how things connect is more important than what they are” [00:13:36].
  • Antagonistically Adaptive: Complicated systems generally assume components are not “antagonistically adaptive” (e.g., a carburetor won’t maliciously stop an engine) [00:14:09]. In a complex adaptive system, something beneficial one day can become harmful another, like symbiosis evolving from parasitism [00:14:46].
  • Nested Systems: While a complicated system can be embedded in a complex system (e.g., a factory in a marketplace) [00:15:51], complex systems can also be embedded in complicated ones (e.g., micro-complexity dealing with an overarching political framework) [00:16:47].

Application of Cynefin in Organizational Context

  • Management Fads: The Cynefin framework directly challenges management fads like “business process reengineering” or “agile” that claim universal solutions [01:12:07].
  • Contextual Appropriateness: Decisions and leadership styles must be appropriate to the context; servant leadership doesn’t work universally, nor does draconian [00:45:41].
  • Attitudes as Early Indicators: Measuring employee attitudes (e.g., to cybersecurity) provides “early weak signals of a dispositional state,” which is more valuable than trying to imply causality or define an outcome [00:46:04].
  • Site-Casting: Instead of forecasting (projecting forward) or backcasting (closing gaps to a desired future), Snowden’s work focuses on “site-casting” – probing the present to see what’s possible before risking the future [01:09:06], [01:15:15].

Dealing with Complexity

Apex Predator Theory

When anything becomes commoditized, frequency and variety in a system are lost, making the system perverse and ripe for new entities to come into play, like “apex predators” [00:17:27].

  • Business Example: IBM’s dominance in early computing due to “exaptation” (repurposing technology) led to a first-mover advantage [00:17:43]. However, it failed to see hardware becoming a commodity, leading to near-catastrophic failure [00:17:54].
  • Political Example: Neoliberalism homogenized the political left and right, leading to a lack of perceived choice and lowering the energy cost for extremists to gain influence [00:18:18]. The danger is that the new predators who stabilize a new ecosystem become “impossible to disrupt for a significant period of time” [00:18:40].

Managing Dissent and Diversity

Organizations should shift from homogeneity (e.g., everyone having the same values or goals) to “coherent heterogeneity” [00:47:58]. This means cultivating differences that can come together in various ways, ensuring resilience [00:48:14].

  • Identifying Outliers: Using attitude mapping, organizations can measure cognitive and behavioral diversity and identify outlier groups that deserve attention, rather than being drowned out by middle management [00:48:45].
  • Coherence Test: When conflicting views or “dissonant groups” exist, the approach is to let clusters of coherent ideas (even outlier ones) run “small safe-to-fail experiments” to see what works [00:49:30]. This relies on objective, quantifiable measures of coherence to determine which dissidents are “worth talking to” [00:50:24].
  • Right Amount of Diversity: The optimal level of diversity is situationally dependent [00:52:21]. In a stable ecosystem, less diversity is needed (exploitation) [00:52:27]. When the system destabilizes, increased diversity (exploration) is needed quickly [00:52:29]. AI can be used to trigger when to switch between these modes [00:52:54].
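Identifying outlier groups from attitude mapping amounts to finding respondents whose profiles sit far from the population’s center. The sketch below is an assumption about how such a measure might work, not SenseMaker’s actual algorithm: profiles as score vectors, distance from the centroid, and a mean-distance multiple as the outlier threshold are all illustrative choices.

```python
def find_outliers(profiles, threshold=1.5):
    """Flag attitude profiles far from the population centroid.

    `profiles` maps a respondent id to a vector of attitude scores;
    the threshold (a multiple of the mean distance) is illustrative.
    """
    ids = list(profiles)
    dims = len(next(iter(profiles.values())))
    centroid = [sum(profiles[i][d] for i in ids) / len(ids) for d in range(dims)]

    def dist(v):
        # Euclidean distance from the centroid.
        return sum((a - b) ** 2 for a, b in zip(v, centroid)) ** 0.5

    distances = {i: dist(profiles[i]) for i in ids}
    mean_dist = sum(distances.values()) / len(ids)
    # Outliers sit well beyond the typical distance from the centroid.
    return [i for i in ids if distances[i] > threshold * mean_dist]
```

The management implication is the inverse of the usual survey practice: rather than averaging the outliers away, they are surfaced as the groups most worth a safe-to-fail experiment.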

The Role of Narrative

Narrative is central to Snowden’s work because it captures tacit knowledge: “we always know more than we can say,” and “we can always say more than we can write” [00:58:31].

  • Micro-Narratives: The “watercooler stories,” school gate stories, or supermarket checkout queue stories reveal people’s true attitudes more than formal questionnaires [00:58:43].
  • Self-Interpretation: Critically, people should have the power to self-interpret their own narratives rather than having them interpreted by text search, algorithms, or experts [00:59:04]. This allows for scaling to high volumes quickly [00:59:13].
  • Ambiguity: Narrative carries ambiguity, serving as a “halfway house” between explicit data (like a map user) and tacit knowledge (like a black cab driver’s instinctive navigation) [00:59:28].
  • Narrative-Enhanced Doctrine: HTML links to real stories within best practice documents can provide richer context and help people interpret data [00:59:50].

SenseMaker Software

SenseMaker is a software platform designed to gather and analyze day-to-day micro-narratives [00:59:02].

  • How it Works:
    1. Non-Hypothesis Question: Instead of direct questions (like an employee satisfaction survey), it asks open-ended questions like “What story would you tell your best friend if they were offered a job in your workplace?” [01:02:02].
    2. Self-Interpretation: Individuals record their story (spoken, picture, typed) and then self-interpret it onto a series of “triangles” (or triads) [01:02:10]. These triangles balance off three positive qualities (e.g., “altruistic, assertive, analytical”) [01:02:20].
    3. Metadata Generation: Each placement of a dot on a triangle adds metadata points (e.g., six triangles add 18 metadata points) to the original narrative [01:02:45].
    4. Analysis: The statistical data, along with the original narrative, is analyzed to produce “fitness landscapes” that show statistical patterns and provide an explanation of what they mean [01:02:50].
  • Benefits:
    • Triggers deeper thinking by not expecting a specific answer [01:02:40].
    • Provides quantitative data from qualitative input (“human metadata”) [00:25:14].
    • Allows for rapid turnaround of results (in hours or instantly), unlike traditional surveys [01:05:01].
    • Can be used to map organizational culture, assess attitudes, or even for political polling to find shared values [00:25:35], [01:03:29].
    • Empowers individuals by allowing them to interpret their own experiences, reducing reliance on expert mediation [01:04:30].
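The triad mechanics above have a natural numeric form: a dot placed inside a triangle is a set of three non-negative weights over the triad’s labels, summing to one (barycentric coordinates), so six triads yield 6 × 3 = 18 metadata values per story. The encoding below is a sketch of that arithmetic under those assumptions; the function name and data layout are not SenseMaker’s.

```python
def triad_signature(placements):
    """Flatten triad placements into a metadata vector for one story.

    Each placement is a dot in a triangle, encoded as weights over the
    triad's three labels (non-negative, normalized to sum to 1).
    """
    vector = []
    for labels, weights in placements:
        assert len(labels) == len(weights) == 3
        total = sum(weights)
        assert total > 0 and all(w >= 0 for w in weights)
        vector.extend(w / total for w in weights)  # normalize the dot position
    return vector

# One story interpreted against six triads (labels after the example above;
# the five extra triads are placeholders).
story_triads = [
    (("altruistic", "assertive", "analytical"), (0.6, 0.3, 0.1)),
] + [((f"q{i}a", f"q{i}b", f"q{i}c"), (1, 1, 1)) for i in range(5)]

signature = triad_signature(story_triads)
assert len(signature) == 18  # six triads -> 18 metadata points
```

Because the weights are the respondent’s own placement, the quantitative layer is self-interpreted rather than expert-coded, which is what makes the rapid, large-volume analysis possible.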

Other Tools and Concepts

  • Agent-Based Modeling: While useful for systems with single agency and rules, agent-based models are problematic in human systems with multiple identities and rule-based decisions [00:27:25]. They can provide insight but not predictive elements [00:28:28]. The statistics on the ensemble of trajectories can be interesting for understanding system characteristics (e.g., Gaussian vs. fat-tailed spaces) [00:29:05].
  • Gaussian Pareto Distinction: In Pareto-distributed (fat-tailed) systems, the best one can do is trigger human beings to a “heightened state of alert” [00:29:37]. This is used in counterterrorism to signal when an outrage is more likely, rather than predicting it [00:29:45].
  • Human Agency and AI: The AI industry often tries to reduce human agency, whereas the focus should be on increasing it [00:54:21]. Humans are better at learning faster, abstract thinking, and dealing with novel or unexpected connections [00:38:50], [00:36:25].
  • Ethics and Aesthetics in AI: Engineers, especially in AI, should be trained in ethics and aesthetics [00:40:00]. Aesthetics, rooted in human evolution, is about abstractions, which allows for rapid “exaptive thinking” (repurposing things) and fosters higher empathy [00:40:12].
  • Red Teams: These are valuable for challenging assumptions, especially when truly independent [00:57:07]. Snowden uses “ritual dissent” to fragment into multiple small micro-red teams [00:57:18].

Complexity in Social and Political Spheres

Snowden’s work is increasingly focused on democracy, education, and creating a more humane society, applying a natural science approach to social systems [01:07:11].

  • Critique of Social Science: Much of social science is seen as self-referential, focused on paper production, and prone to becoming “politics by other means” [01:10:13], [01:10:36]. If social science claims predictive capacity like the management sciences, it becomes a “pseudoscience” [01:10:49].
  • Natural Science as Constraint: Natural science provides a “tight focus on what’s actually possible” in social systems, acting as a constraint [01:10:58], [01:12:02].
    • Example: Knowing about “inattentional blindness” (where people miss obvious things if not looking for them) means systems must be designed to find the 17% who do see, rather than trying to train everyone not to have the blindness [01:11:15].
  • Current Projects: This includes projects on making global warming a micro-issue, understanding abuse in partnerships, and creating new approaches to design thinking based on complexity [01:06:51], [01:07:27].

Snowden highlights the urgency of applying these insights, noting a significant increase in the perceived chance of social collapse in his lifetime [01:08:44].