From: jimruttshow8596
The work of Dave Snowden, founder and chief scientific officer of Cognitive Edge, extensively covers complex issues related to strategy, organizational design, and decision making [00:00:26]. He pioneered a science-based approach drawing on anthropology, neuroscience, and complex adaptive systems theory [00:00:31].
Critique of Traditional Approaches to Decision Making
Snowden criticizes early knowledge management efforts at companies like IBM in the 1990s, which often focused heavily on technology and codification [00:02:01]. He describes these attempts as “naive Newtonianism”: the belief that, given all the data about position and velocity, the future can be predicted [00:03:22]. This approach takes a linear view of causality, assuming inputs define outputs and future states can be forecast [00:03:46], and it reveals a lack of understanding of concepts such as deterministic chaos [00:03:38].
Traditional management science has been criticized for producing a series of fads, each claiming to have universal solutions, but lacking a scientific basis and failing to persist [01:12:07]. Many management tools, like Myers-Briggs, are considered pseudoscience but continue to be used due to profitability for consultants [01:12:30].
The Cynefin Framework
Snowden is best known as the inventor of the Cynefin framework, pronounced “Can-EV-in” [00:01:16, 00:01:31]. This framework is based on a fundamental division into three types of systems: ordered, complex, and chaotic, with a phase shift between them [00:08:02]. It emphasizes that context is key in deciding which method to use for decision making [01:16:19].
Ordered Systems
Ordered systems have a very high level of constraint, making everything predictable [00:08:27]. They are divided into:
- Obvious: Where the relationship between cause and effect is self-evident, understandable by everyone [00:08:49]. This is the domain of “best practice,” where one senses, categorizes, and responds with rigid constraints [00:08:56]. An example is driving on a specific side of the road [00:08:33].
- Complicated: For experts, cause and effect may be obvious, but for the decision-maker, it requires investigation and expertise [00:09:07]. There is a “right answer” that can be discovered, often within a range of possibility [00:09:19]. This is the domain of “good practice” [00:09:31]. An example is a medical practitioner’s flexibility in patient decisions [00:09:34].
Over-constraining an ordered system, for example through overly rigid expense systems, can cause it to break and fragment into chaos, a “catastrophic fold” [01:00:10, 01:00:11].
Complex Systems
Complex systems have “enabling constraints”: everything is interconnected, but the connections are not fully known [01:00:41, 01:00:46]. A “dark constraint” is one whose impact is visible but whose origin is not [01:00:51]. Understanding a complex adaptive system requires probing and experimenting, critically, in parallel [01:01:04, 01:01:11].
If evidence supports conflicting hypotheses for action and cannot be resolved within the decision timeframe, the situation is complex [01:01:19, 01:01:28]. In such cases, one should not try to resolve the conflict but instead create “safe-to-fail micro-experiments” around each coherent hypothesis, running them in parallel to allow solutions to emerge [01:01:34, 01:01:36, 01:01:43].
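The parallel safe-to-fail logic above can be caricatured in a few lines. This is a toy sketch, not Cognitive Edge's method: each coherent hypothesis gets a small probe budget, then whatever shows a positive signal is amplified and the rest is dampened, instead of resolving the conflict between hypotheses up front. The function names and the signed-signal convention are invented for illustration.

```python
def safe_to_fail(hypotheses, probe, budget=5):
    """Toy sketch of parallel safe-to-fail probing: give each coherent
    hypothesis a small probe budget, then amplify whatever shows a
    positive signal and dampen the rest, rather than picking a single
    winner before any evidence exists."""
    # Run a few cheap probes per hypothesis (conceptually in parallel)
    results = {h: [probe(h) for _ in range(budget)] for h in hypotheses}
    # Amplify hypotheses whose average signal is positive, dampen the rest
    amplify = [h for h, r in results.items() if sum(r) / len(r) > 0]
    dampen = [h for h in hypotheses if h not in amplify]
    return amplify, dampen
```

In practice each probe would be a real intervention with an agreed failure signal; here `probe` is any callable returning a signed outcome.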
Chaotic Systems
In the Cynefin framework, chaos is defined as the absence of constraints [01:03:05]. It is typically temporary [01:03:01, 01:03:34]. In human systems, chaos represents a high energy gradient, and constraints emerge quickly [01:03:12, 01:03:17]. Examples include industries collapsing overnight due to “competence-induced failure” [01:02:28, 01:03:31]. The 9/11 attacks were briefly a chaotic state [01:03:50].
To deal with a truly chaotic system, the guidance is to create constraints [01:03:56, 01:04:05, 01:04:07]. This requires having distributed networks of people from multiple backgrounds built before a crisis [01:03:32, 01:04:37, 01:05:19]. Humans are naturally good in chaos and have evolved for collective decision making [01:03:39, 01:03:41, 01:03:44].
Disorder
The final domain is “disorder,” the central space in Cynefin [01:01:47]. It represents a state of not knowing which system type (ordered, complex, or chaotic) one is in [01:01:53]. Entering it can be accidental or deliberate, but it often indicates an “inauthenticity,” such as imposing order when it’s inappropriate or failing to impose order when it is needed [01:02:00, 01:02:02].
Distinction Between Complicated and Complex
A key distinction in systems thinking is between complicated and complex:
- Complicated systems are the sum of their parts; problems can be solved by breaking things down [01:13:30]. Components are generally not antagonistically adaptive and are relatively static [01:14:09, 01:14:14, 01:14:28]. They are often engineered [01:14:39].
- Complex systems have properties of the whole resulting from interactions between parts, their linkages, and constraints [01:13:36]. How things connect is often more important than what they are [01:14:47]. Emergent patterns cannot be decomposed into original parts [01:14:52]. Components can be symbiotically or antagonistically adaptive [01:14:49, 01:14:55].
Complicated systems are often embedded within complex systems (e.g., a factory in a marketplace) [01:15:51]. However, complex systems can also be embedded in complicated ones, such as micro-complex dynamics within an overarching political framework [01:16:48, 01:17:05].
Tools and Approaches for Complex Systems
Narrative and Micro-Narratives
Narrative is central to understanding human systems, as people “always know more than they can say,” and “can always say more than they can write” [01:58:31, 01:58:35]. Watercooler stories and day-to-day micro-narratives capture true attitudes, unlike focus groups or questionnaires [01:58:43, 01:59:02].
Crucially, individuals should self-interpret their own narratives or “micro-observations” rather than relying on algorithmic or expert interpretation [01:59:04, 01:59:10, 02:00:14]. This allows for scaling to very high volumes of data [01:59:13]. Narrative carries ambiguity and sits between explicit data (like a map) and tacit knowledge (like a London black cab driver’s knowledge) [01:59:28, 01:59:30]. Narrative-enhanced doctrine, where real stories are linked into best practice documents, provides richer context [01:59:50, 01:59:56].
SenseMaker Software
SenseMaker is a software platform designed to gather and interpret micro-narratives [02:00:15, 02:01:15]. Instead of traditional surveys, SenseMaker asks non-hypothesis questions (e.g., “What story would you tell your best friend if they were offered a job in your workplace?”) [02:02:02]. Users self-interpret their stories by positioning them within a series of triangles whose three corners each carry a positive quality (e.g., altruistic, assertive, analytical), forcing a balance rather than a single rating [02:02:08, 02:02:17, 02:02:24].
This process triggers a shift from “fast thinking” to “slow thinking” as users don’t know the expected answer, forcing deeper thought [02:02:31]. Each self-interpretation adds 18 metadata points to the original narrative, which are then analyzed [02:02:45, 02:02:48]. The original narrative is carried with the statistical data to explain patterns [02:02:56].
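To make the triad-to-metadata step concrete, here is a minimal sketch, not SenseMaker's actual schema: each triad placement is a point inside a triangle, i.e. three barycentric weights summing to 1, and six such triads would yield the 18 metadata points mentioned above. All corner labels other than “altruistic, assertive, analytical” are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Triad:
    """One self-interpretation: a point inside a triangle whose three
    corners are labelled qualities, stored as barycentric weights."""
    labels: tuple
    weights: tuple

    def __post_init__(self):
        assert len(self.labels) == 3 and len(self.weights) == 3
        total = sum(self.weights)
        # Normalise so the three weights always sum to 1
        self.weights = tuple(w / total for w in self.weights)

@dataclass
class MicroNarrative:
    """A story carried together with its self-interpretation metadata,
    so the original narrative can explain any statistical pattern."""
    story: str
    triads: list = field(default_factory=list)

    def metadata(self):
        # Flatten every triad corner into a signifier -> weight pair;
        # six triads of three corners give 18 metadata points.
        return {label: w
                for t in self.triads
                for label, w in zip(t.labels, t.weights)}
```

The key design point the source emphasizes survives in this sketch: the weights come from the storyteller's own placement, not from an algorithm or expert coder.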
SenseMaker enables rapid turnaround of results (instantly in some cases) because of self-scoring [02:04:59, 02:05:29]. It can be used for:
- Mapping cultural attitudes within organizations (e.g., cybersecurity attitudes) [00:52:50, 00:53:35, 02:04:06].
- Distributed decision support by showing dominant views and critical outlier views [00:52:50, 02:05:56].
- Political polling or creating de novo political parties by identifying shared values [02:03:32, 02:03:52, 02:04:09].
- Empowering clients in social work to tell and interpret their own stories [02:04:25].
- Real-time 360-degree feedback for leaders, focusing on descriptive rather than evaluative feedback [02:05:44, 02:05:54].
Critique of Agent-Based Models
Agent-based models and simulations are useful where agents have a single identity and act on clear rules [02:27:25]. Most human systems, however, involve multiple identities and patterns that are not solely rule-based [02:27:31]. There is also a risk of confusing simulation with prediction [02:27:39]. While agent-based models can offer insight and clues about aspects of a system, they do not provide the predictive power often expected of them [02:28:24, 02:28:30]. Murray Gell-Mann famously stated that “the only valid model of a human system is the system itself” [02:28:18].
However, the statistics on the ensemble of trajectories from agent-based models can be useful for understanding if a system operates in a Gaussian or fat-tailed (extremistan) space [02:29:00, 02:29:05]. Managing in these different spaces requires different approaches [02:29:20].
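One simple, assumption-laden diagnostic for this distinction: compute the excess kurtosis of an ensemble of outcomes. Values near zero suggest Gaussian behaviour; large positive values suggest a fat tail. The two sampled distributions below merely stand in for the ensembles of trajectories an agent-based model would produce.

```python
import random

def excess_kurtosis(xs):
    """Fourth standardised moment minus 3: roughly 0 for Gaussian data,
    large and positive for fat-tailed data."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2 * m2) - 3.0

random.seed(42)
# Stand-ins for two ensembles of simulated outcomes
gaussian_world = [random.gauss(0, 1) for _ in range(10_000)]
fat_tailed_world = [random.paretovariate(1.5) for _ in range(10_000)]
```

A real analysis would use more robust tail estimators, but even this crude moment test separates the two regimes that, as noted above, require different management approaches.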
Anticipatory Triggers
In a Pareto distribution (fat-tailed space), the best approach is to trigger human beings to a heightened state of alert to look at something, rather than attempting to predict outcomes [02:29:37, 02:29:40]. This is done by building training datasets based on past fragmented observations (e.g., counter-terrorism examples) [02:53:10, 02:53:14]. By involving executives in constructing these datasets, they trust the AI system more as it’s not a “black box” [02:53:25, 02:53:27]. This “traceability” and understanding of the mechanism is crucial for decision-makers [02:53:37].
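A minimal sketch of such a trigger, with feature sets and threshold invented for illustration: incoming fragments are scored against human-curated patterns, and above a threshold the system alerts a person rather than emitting a prediction. Returning the matched pattern is what makes the alert traceable rather than a black box.

```python
def best_match(fragment, patterns):
    """Score a new fragment (a set of observed features) against each
    curated pattern by Jaccard overlap; return the best score and pattern."""
    scored = [(len(fragment & p) / len(fragment | p), p) for p in patterns]
    return max(scored, key=lambda sp: sp[0])

def maybe_alert(fragment, patterns, threshold=0.5):
    """Trigger a heightened state of human alert instead of a prediction;
    the matched pattern is returned so the decision-maker can trace
    exactly why the alert fired."""
    score, pattern = best_match(fragment, patterns)
    if score >= threshold:
        return {"alert": True, "score": score, "matched": pattern}
    return {"alert": False, "score": score}
```

In the counter-terrorism example the curated patterns would be built with executives from past fragmented observations, which is precisely what earns the system their trust.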
Leadership and Management in Complex Contexts
The Cynefin framework suggests there isn’t one universal leadership style [02:45:41]. Leaders need to be multifaceted and distribute leadership [02:45:50, 02:45:53]. Attitudes are critical as they are early indicators of dispositional states [02:45:57, 02:46:54]. Measuring attitudes to things like cybersecurity allows systems to prompt for “more stories like this, fewer stories like that” in real-time [02:46:06, 02:46:33].
Encouraging Dissent and Diversity
Homogeneity, often promoted by concepts like “learning organization” or Agile, destroys system resilience [02:47:58, 02:48:03, 02:48:09]. Instead, “coherent heterogeneity” is needed, where differences can come together in various ways [02:48:14, 02:48:16, 02:49:03]. Attitude mapping can identify outlier groups that deserve attention, preventing them from being drowned out [02:48:45, 02:49:51].
To manage dissent without being overwhelmed by “crankery” or “horseshit,” one can use a “shallow dive into chaos” [02:50:47, 02:50:50]: present a situation to the workforce, let them interpret it (generating multiple views that cluster), and then allow coherent clusters to run small, safe-to-fail experiments [02:50:52, 02:51:31, 02:51:39, 02:51:41, 02:51:43]. This statistical mapping separates coherent ideas worth exploring, even unconventional ones, from those that are nonsense [02:51:55, 02:52:01]. The test for coherence is a key concept [02:52:02].
The amount of diversity needed is situationally dependent: more is required when a system is destabilizing, less in stable ecosystems [02:52:23, 02:52:27]. This relates to the exploitation (stable times) vs. exploration (unstable times) dynamic in evolutionary computation [02:52:43, 02:52:45].
Critique of Management Fads
Many management fads, such as business process re-engineering or Agile, claim to be universal solutions but only work in specific contexts [02:25:27, 02:29:23, 02:29:30, 01:13:09, 01:13:11]. This aligns with the “no free lunch theorem,” which states there is no single best algorithm; one must understand the domain first [01:13:19, 01:13:28, 01:13:31]. Applying natural science principles, such as understanding cognitive biases like “inattentional blindness,” can serve as a “constraint on what you can do in social systems” [01:10:58, 01:11:12, 01:11:31].
Future Directions
Snowden’s current focus is on applying a natural science approach to social systems, shifting away from case-based inductive methods [01:07:44, 01:07:48]. This includes projects on:
- Democracy and education [01:07:13].
- Making global warming a micro issue to empower people at a local level [01:06:51, 01:07:13, 01:07:31].
- Understanding and addressing abuse in partnerships using narrative as a therapeutic device [01:07:01, 01:07:05].
The danger with current AI trends is that they may reduce human intelligence and capability rather than augment it, especially if driven by approaches that lack abstraction or empathy [01:12:22, 01:12:26, 01:12:35, 01:12:42]. There is a significant gap between what AI can do and what humans can do, and it may be permanent [01:12:50, 01:12:54]. Snowden argues for incorporating ethics and aesthetics into the training of software engineers: aesthetics, tied to abstraction and metaphor, enhances a decision maker’s ability to appreciate beauty and connect disparate ideas [00:39:41, 00:39:54, 00:40:10, 00:41:00]. This is crucial for recognizing novel or unexpected connections and assessing their possibilities [00:36:31].
There is concern that too much of social science has become self-referential, focused on publication rather than innovation, and often influenced by political agendas [01:09:31, 01:10:13, 01:10:34]. Natural science offers explanation and prediction, while social science can provide explanation but not prediction [01:10:43, 01:10:47]. Applying natural science as a constraint helps focus on what’s actually possible in social systems, avoiding the pitfalls of unscientific management fads [01:10:58, 01:12:02].