From: jimruttshow8596

Measuring complexity is inherently difficult because “complexity” itself is multifaceted [01:29:17]. Researchers who try to define it mathematically encounter numerous approaches, each with its own strengths and limitations [01:28:47]. The history of these attempts has produced a diverse array of measures, each applicable in different domains [02:24:30].

The Challenge of Defining Complexity

In early attempts in the mid-1980s, researchers like Seth Lloyd hoped to find a single mathematical measure, akin to Spock’s precise readings [01:48:00]. This proved elusive: a 2002 discussion among experts at the Santa Fe Institute yielded 20 different ideas about how to measure complexity [02:22:00], and Lloyd’s own 1988 survey identified “31 measures of complexity,” a list that was far from exhaustive [02:40:00].

A core difficulty lies in distinguishing complex systems from complicated systems or simple ones [03:00:00]. A single electron, for instance, is simple but requires complex theory to understand, while three interacting electrons become complex [03:02:00]. Systems like the metabolism of a bacterium are clearly complex due to thousands of interacting chemical reactions and feedback loops [03:40:00]. Yet, assigning a single number to its complexity is challenging [04:40:00].

A key distinction often made is that complex things should require a lot of information to describe, but shouldn’t be random [09:25:00]. This highlights a fundamental problem: what one measure considers complex, another might deem simple [04:50:00]. Different fields often develop their own context-specific measures [05:25:00].

Types of Complexity Measures

Algorithmic Complexity (Kolmogorov Complexity)

This measure, also known as Kolmogorov Complexity, defines the complexity of an object (like a string of numbers or a picture) as the length of the shortest computer program required to generate it [08:36:00].

  • Pros: It accurately reflects the simplicity of highly ordered sequences. For example, a billion ones (1111...) is algorithmically simple because a short program (“print ‘1’ one billion times”) can generate it [07:30:00].
  • Cons: It assigns the highest complexity to random sequences (like static on a TV screen or a coin flip sequence) because the shortest program to describe them is essentially the sequence itself [08:56:00]. This clashes with the intuitive understanding of “complexity” as being distinct from mere randomness [09:17:17].
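
Kolmogorov complexity is uncomputable in general, but an off-the-shelf compressor gives a rough upper bound and makes the pros and cons above tangible. A minimal Python sketch, using zlib purely as a stand-in rather than as part of the formal definition:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude, computable upper bound on
    algorithmic (Kolmogorov) complexity, which cannot be computed exactly."""
    return len(zlib.compress(data, 9))

ordered = b"1" * 1_000_000        # a million '1' characters: highly ordered
random_ = os.urandom(1_000_000)   # a million random bytes: incompressible

print(compressed_size(ordered))   # tiny: a short description ("print '1' a million times") suffices
print(compressed_size(random_))   # close to 1,000,000: the shortest description is the data itself
```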

Shannon Entropy (Information)

Shannon entropy, developed by Claude Shannon for communications theory, measures the amount of information required to describe something, taking statistical regularities into account [10:11:00]. Its formula is mathematically the same one derived in the 19th century for thermodynamic entropy by Maxwell, Boltzmann, and Gibbs: thermodynamic entropy can be read as the information needed to describe the positions and motions of atoms [10:17:00].

  • Application: Useful for compressing messages by assigning shorter codes to more frequent letters or patterns, as seen in Morse code or digital file compression (e.g., ZIP, GIF) [11:41:00].
  • Limitations: Like algorithmic complexity, it can be high for purely random systems, which are not typically considered “complex” in the intuitive sense [07:55:00].
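
Shannon’s formula is H = −Σ p(x) log₂ p(x), summed over the possible symbols x. A minimal sketch estimating it from empirical character frequencies:

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Entropy in bits per symbol, H = -sum(p * log2(p)), estimated from
    the empirical frequency p of each character in the message."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(shannon_entropy("aaaaaaaaaa"))           # 0.0 bits: perfectly predictable
print(shannon_entropy("ababababab"))           # 1.0 bits: two equally likely symbols
print(shannon_entropy("the quick brown fox"))  # higher: more varied symbol usage
```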

Logical Depth (Charles Bennett)

Introduced by Charles Bennett, logical depth attempts to capture the “effort” or “time” required to produce a complex object from its simplest description [12:17:00].

  • Definition: It measures the number of computational steps a computer must take to produce an output, starting from the shortest program for that output [14:47:00].
  • Examples:
    • A string of a billion ones has low logical depth because its short generating program executes quickly [15:08:00].
    • A random bit string also has low logical depth because its “shortest program” is simply printing the string itself, which is fast [15:20:00].
    • The first billion digits of Pi, despite having a short mathematical description (program), require a very long computational time to produce [13:51:51]. Thus, Pi’s digits exhibit high logical depth [15:53:00].
    • Patterns generated by cellular automata, particularly those computationally universal like Rule 110, can also be logically deep because their complex output emerges from many steps of a simple rule [16:01:00].
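
Logical depth is defined in terms of the provably shortest program, which cannot be found in practice, but the intuition can be illustrated by two short programs of similar length whose running times differ sharply. The sketch below uses Gibbons’ spigot algorithm for the digits of Pi; the exact timings vary by machine and are only meant to show the contrast.

```python
import time
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot algorithm: a short program that yields the
    decimal digits of Pi, but needs many computational steps per digit."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = 10 * q, 10 * (r - n * t), t, k, (10 * (3 * q + r)) // t - 10 * n, l
        else:
            q, r, t, k, n, l = q * k, (2 * q + r) * l, t * l, k + 1, (q * (7 * k + 2) + r * l) // (t * l), l + 2

# A short description that also runs fast: shallow.
start = time.perf_counter()
ones = "1" * 1_000_000
print(f"a million ones:     {time.perf_counter() - start:.4f} s")

# An equally short description that runs much longer per output symbol: deeper.
start = time.perf_counter()
digits = list(islice(pi_digits(), 2_000))
print(f"2,000 digits of Pi: {time.perf_counter() - start:.4f} s")
```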

Thermodynamic Depth (Pagels & Lloyd)

Developed by Seth Lloyd and Heinz Pagels, thermodynamic depth is a physical analogue of logical depth [18:48:00].

  • Definition: It quantifies the physical resources (specifically, free energy) that had to be consumed and dissipated over a system’s actual historical formation process in order to assemble it [19:06:00].
  • Example: The metabolism of a bacterium has “humongous” thermodynamic depth because it took billions of years of evolution and immense energy expenditure through natural selection to achieve its current complex state [19:36:00].
  • Connection: It connects physical and computational definitions of complexity via the physics of computation [20:06:00].
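
One concrete bridge between the computational and physical pictures, offered here as background rather than as Lloyd and Pagels’ formal definition, is Landauer’s bound: each bit of information irreversibly erased during a computation dissipates at least k_B T ln 2 of free energy, so a long assembly history of irreversible steps implies a corresponding thermodynamic cost.

```latex
\[
  E_{\text{dissipated}} \;\ge\; N \, k_B T \ln 2,
  \qquad
  k_B T \ln 2 \,\big|_{T = 300\,\mathrm{K}} \approx 2.9 \times 10^{-21}\ \mathrm{J\ per\ bit}.
\]
```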

Effective Complexity (Gell-Mann & Lloyd)

Co-developed by Murray Gell-Mann and Seth Lloyd, effective complexity aims to combine physical and computational notions of complexity by distinguishing between random and non-random information [21:18:00].

  • Definition: It focuses on the “non-random” algorithmic part of a system’s description [23:01:00]. It describes the organized, functional information, distinct from purely random noise (entropy) [22:57:00].
  • Application:
    • For gas in a room, the effective complexity describes macroscopic properties like percentages of gases, temperature, and pressure, not the random motion of individual molecules [22:09:00].
    • For a bacterium, it would encompass the organized metabolic pathways, DNA, and structures necessary for its function (e.g., taking in food, reproducing), excluding the random molecular wiggling [24:00:00].
    • For engineered systems like a car, effective complexity refers to the blueprints and descriptions required for its functional requirements and manufacturing, not the state of every atom [47:35:00].
  • Subjectivity: Defining effective complexity often requires a subjective decision about what information is “important” based on the system’s purpose or context [26:01:01]. This leads to the concept of coarse-graining.
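
A toy sketch of the distinction, with the subjectivity noted above built in: we decide up front that a repeating template counts as the system’s regularities and the small perturbations count as noise. The names and numbers are illustrative only, not part of Gell-Mann and Lloyd’s formal definition.

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Compressed size as a rough stand-in for algorithmic information content."""
    return len(zlib.compress(data, 9))

# A signal built from a simple regularity (a repeating template) plus random noise.
random.seed(0)
template = bytes([10, 20, 30, 40, 50, 60, 70, 80])   # the chosen "regularities"
signal = bytes((b + random.randint(-3, 3)) % 256 for b in template * 10_000)

full_size = description_length(signal)       # regularities plus noise: large, dominated by entropy
regular_size = description_length(template)  # the non-random part alone: tiny

print(f"full description ~ {full_size} bytes")
print(f"effective part   ~ {regular_size} bytes (the organized, repeatable structure)")
```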

Coarse-Graining

A concept developed in the 19th century by Gibbs and Maxwell, coarse-graining involves describing a system at a particular scale, effectively “tossing out” information below that scale [29:14:00]. This approach is fundamental to defining effective complexity, as it helps determine the level of detail necessary for a given purpose [29:47:00].
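
A minimal sketch of the mechanism, assuming a toy two-dimensional grid: each block of cells is replaced by its average, so everything below the chosen scale is discarded.

```python
import numpy as np

def coarse_grain(grid: np.ndarray, block: int) -> np.ndarray:
    """Replace each block-by-block patch with its mean, discarding all detail
    below the chosen scale."""
    h, w = grid.shape
    trimmed = grid[: h - h % block, : w - w % block]
    return trimmed.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

rng = np.random.default_rng(0)
fine = rng.integers(0, 2, size=(64, 64))   # microscopic detail: 4,096 cells
coarse = coarse_grain(fine, 8)             # macroscopic view: an 8 x 8 grid of block averages

print(fine.size, "fine-grained values ->", coarse.size, "coarse-grained values")
```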

Fractal Dimensions

Emerging from the study of nonlinear dynamical systems and chaos, fractal dimensions describe patterns that are self-similar across different scales, like snowflakes or the Lorenz attractor for weather systems [30:37:00].

  • Application: While chaotic systems are intrinsically unpredictable at a micro-level, they are confined to these fractal structures (strange attractors), which allows for some level of predictability [31:11:00].
  • Relevance: Useful in fields like meteorology, where weather patterns, though complex, can be understood and partially predicted through these fractal structures [34:00:00].
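
The box-counting dimension is one standard way to put a number on such structures: count how many boxes of side ε are needed to cover the set and fit the slope of log N(ε) against log(1/ε). A sketch using the Sierpinski triangle, generated by the chaos game, as a stand-in for a strange attractor:

```python
import numpy as np

def box_counting_dimension(points: np.ndarray, scales) -> float:
    """Estimate the box-counting dimension: the slope of log N(eps) vs log(1/eps),
    where N(eps) is the number of boxes of side eps containing at least one point."""
    counts = [len(np.unique(np.floor(points / eps), axis=0)) for eps in scales]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

# Generate Sierpinski triangle points via the chaos game.
rng = np.random.default_rng(1)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p = np.array([0.1, 0.1])
pts = []
for _ in range(200_000):
    p = (p + vertices[rng.integers(3)]) / 2
    pts.append(p)
pts = np.array(pts)

print(box_counting_dimension(pts, scales=[0.2, 0.1, 0.05, 0.025, 0.0125]))
# roughly 1.5-1.6, close to log(3)/log(2) ~ 1.585 for the Sierpinski triangle
```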

Mutual Information

Mutual information quantifies the information shared between different parts of a multi-subsystem complex system [48:23:00].

  • Definition: It is calculated as the sum of the information in the individual pieces minus the information in the combined whole, representing how much information the parts share [49:13:00].
  • Role: While a necessary condition for complex systems (e.g., a bacterium’s metabolism has vast mutual information due to communication and chemical exchanges), it is not sufficient. A system of a billion identical bits has high mutual information but is not complex [49:53:00].
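
In formula form, I(X;Y) = H(X) + H(Y) − H(X,Y). A small sketch, including a miniature version of the “identical bits” caveat above:

```python
import random
from collections import Counter
from math import log2

def entropy(symbols) -> float:
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def mutual_information(xs, ys) -> float:
    """I(X;Y) = H(X) + H(Y) - H(X,Y): the information shared between two parts."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

random.seed(0)
xs = [random.randint(0, 1) for _ in range(100_000)]
unrelated = [random.randint(0, 1) for _ in range(100_000)]
copies = xs[:]   # identical bits: maximal sharing, yet not "complex"

print(mutual_information(xs, unrelated))  # ~0 bits: nothing shared
print(mutual_information(xs, copies))     # ~1 bit per symbol: fully shared
```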

Integrated Information (Giulio Tononi)

Integrated information is a more intricate form of mutual information, often associated with theories of consciousness [51:31:00].

  • Definition: It measures not only shared information but also the degree to which the operation of different parts of a complex system can be inferred from each other dynamically [52:03:00].
  • Application: Brains and bacteria, which perform complex information processing, exhibit high integrated information [52:19:00].
  • Controversy: While proponents suggest high integrated information is linked to consciousness, critics argue that even simple error-correcting codes can have high integrated information without being conscious [53:00:00]. This points to the need for clear definitions when discussing concepts like consciousness [54:46:00].
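
Tononi’s Φ involves searching over all ways of partitioning a system and is far more involved than this, but a crude proxy for the idea that one part’s operation can be inferred from another’s is the time-lagged mutual information between two coupled parts. The coupling rule and noise level below are invented for illustration.

```python
import random
from collections import Counter
from math import log2

def entropy(seq) -> float:
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def time_lagged_mi(a, b) -> float:
    """Mutual information between part A's state now and part B's state one
    step later: how well B's dynamics can be inferred from A."""
    now, later = a[:-1], b[1:]
    return entropy(now) + entropy(later) - entropy(list(zip(now, later)))

# Two coupled binary units: each mostly copies the other's previous state.
random.seed(0)
a, b = [0], [1]
for _ in range(50_000):
    prev_a, prev_b = a[-1], b[-1]
    a.append(prev_b if random.random() > 0.05 else random.randint(0, 1))
    b.append(prev_a if random.random() > 0.05 else random.randint(0, 1))

print(time_lagged_mi(a, b))  # high: B's next state is largely predictable from A
print(time_lagged_mi(b, a))  # and vice versa
```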

Network Complexity

This broad class of measures addresses complex systems structured as networks [57:21:00].

  • Examples: Communication networks, neural connections in the brain, and power grids [57:29:00].
  • Analysis: Network complexity involves understanding both structure (e.g., the different types of power plants and transmission lines in a grid) and dynamics (e.g., how electricity flows through the network and what unforeseen behaviors arise) [58:02:00]. These systems can exhibit chaotic regimes, especially when pushed to their limits, leading to complex emergent behaviors [58:45:00].
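
A minimal structural sketch, assuming the networkx library and using a Watts–Strogatz small-world graph as a loose stand-in for a grid-like network with a few long-range links; real grid models are far richer.

```python
import networkx as nx

# A ring-lattice network rewired with a few long-range links (Watts-Strogatz
# small-world model), a loose stand-in for a grid with some long transmission lines.
G = nx.connected_watts_strogatz_graph(n=500, k=4, p=0.05, seed=42)

print("average clustering:    ", nx.average_clustering(G))
print("average shortest path: ", nx.average_shortest_path_length(G))
print("degree histogram:      ", nx.degree_histogram(G))
```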

Multiscale Entropy

Multiscale entropy relates to measuring complexity at different levels of coarse-graining [01:00:20].

  • Concept: It examines how much information exists within a system at various scales. A system with high multiscale entropy possesses significant information regardless of the scale at which it’s observed [01:01:24].
  • Examples: Living systems like humans, or large networks like the power grid, exhibit high multiscale entropy, with complexity at the macroscopic level down to individual cells and subcellular mechanisms (e.g., mitochondria) [01:01:27].
  • Role: Like mutual and integrated information, high multiscale entropy is often a symptom of complex systems rather than a defining criterion [01:02:50]. Simple fractal systems, for instance, can also have high multiscale entropy without being considered complex [01:02:50].
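
A simplified sketch of the mechanism: coarse-grain a time series by averaging non-overlapping windows at each scale, then estimate the entropy of the result. (The standard multiscale entropy algorithm of Costa and colleagues uses sample entropy rather than the binned Shannon entropy used here; the signal and bin count are arbitrary choices for illustration.)

```python
import numpy as np
from collections import Counter
from math import log2

def shannon_entropy(symbols) -> float:
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def entropy_at_scale(series: np.ndarray, scale: int, bins: int = 8) -> float:
    """Coarse-grain by averaging non-overlapping windows of length `scale`,
    discretize into `bins` levels, and estimate Shannon entropy per symbol."""
    trimmed = series[: len(series) // scale * scale]
    coarse = trimmed.reshape(-1, scale).mean(axis=1)
    edges = np.linspace(coarse.min(), coarse.max(), bins + 1)
    symbols = np.digitize(coarse, edges[1:-1])
    return shannon_entropy(symbols.tolist())

rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(size=20_000))  # a random walk, as a stand-in signal

for scale in (1, 2, 4, 8, 16, 32):
    print(f"scale {scale:>2}: {entropy_at_scale(signal, scale):.2f} bits per symbol")
```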

Conclusion

The core takeaway is that there is no single, universally applicable measure of complexity [01:03:14]. Instead, there are many different ways of measuring complexity, each with its own practicality and suitability for specific applications [01:03:20]. The choice of measure often depends on the domain (e.g., computer science, biology, ecology, engineering) and the specific “purpose” or aspect of complexity one wishes to study [01:03:32]. Ultimately, one should use the measure that is most useful for the task at hand [01:03:52].