From: jimruttshow8596

The measurement of complexity has been a long-standing challenge, with many different approaches proposed across various fields [00:01:25]. When attempting to define a universal measure of complexity, researchers at institutions like the Santa Fe Institute found that twenty different people might offer twenty different ideas [00:02:24]. This suggests that complexity is a broad class of phenomena rather than a single, easily quantifiable entity [00:04:50]. Different fields often develop their own specific measures of complexity that suit their particular needs [00:05:25].

Algorithmic Complexity (Kolmogorov Complexity)

One measure of complexity, particularly relevant in computer science, is algorithmic complexity, also known as Kolmogorov complexity [00:08:36]. This measure quantifies the complexity of an object by the length of the shortest computer program required to generate it [00:07:33].

Characteristics and Examples

  • Simple Objects: A string of a billion ones has low algorithmic complexity because a very short program can generate it (e.g., “print one a billion times”) [00:07:33], [00:08:45]. Similarly, a salt crystal, despite containing on the order of Avogadro’s number of atoms, can be described simply because of its highly ordered, repetitive structure [00:06:49].
  • Random Objects: Conversely, a truly random bit string (e.g., from flipping a coin a billion times) has very high algorithmic complexity [00:09:39]. The shortest program to reproduce such a string would essentially be a direct list of all its bits, as there’s no pattern to compress [00:09:47]. Examples include static on a TV screen [00:08:56] or random molecular motions in a room [00:07:56].
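
Kolmogorov complexity itself is uncomputable, but compressed size gives a rough, computable stand-in for “length of the shortest description.” The minimal Python sketch below (using the standard zlib module; the one-megabyte size and the function-free layout are just illustrative choices) contrasts the two cases above:

```python
import os
import zlib

# Kolmogorov complexity is uncomputable; compressed size is only a rough
# upper bound on description length, used here purely for illustration.
n_bytes = 1_000_000

ordered = b"1" * n_bytes            # "print one a million times"
random_bytes = os.urandom(n_bytes)  # patternless, incompressible data

print(len(zlib.compress(ordered)))       # tiny: the repetition compresses away
print(len(zlib.compress(random_bytes)))  # about as large as the input itself
```

The ordered string shrinks to a tiny fraction of its original size, while the random bytes barely compress at all, mirroring the low versus high algorithmic complexity described above.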

Limitations of Algorithmic Complexity

A key critique of algorithmic complexity as a measure of true complexity is that it assigns its highest values to random sequences [00:09:17]. Intuitively, random noise is not considered “complex” in the same way a living organism or a sophisticated machine is [00:09:59]. True complexity is instead associated with systems that are difficult to characterize or measure, that exhibit emergent properties, and that sit at the “border between disorder and order” [00:02:06], [00:06:07].

Relation to Logical Depth

Charles Bennett proposed logical depth to address this limitation of algorithmic complexity [00:12:14]. Logical depth measures the complexity of an object by the computational time the shortest program needs to produce it [00:14:56].

  • Example: the first billion digits of Pi. A relatively short program (e.g., Archimedes’ method) can produce them [00:14:03], yet running that program takes a significant amount of computational effort [00:14:56]. This contrasts with simple ordered strings (quick to generate) and with random strings (where the shortest program is essentially the string itself, also quick to “generate”) [00:15:11]; see the timing sketch after this list.
  • The concept also applies to cellular automata (e.g., Rule 110), whose simple rules can generate highly intricate patterns that require many computational steps to unfold [00:16:10], [00:17:29].
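
The contrast can be made concrete with a rough timing experiment. The sketch below is only an illustration, not Archimedes’ actual method: it computes Pi with Machin’s arctangent formula via Python’s decimal module, and the 1,000-digit target and function names are arbitrary choices. An ordered string appears almost instantly, while a comparably short program for Pi needs noticeably more computation:

```python
import time
from decimal import Decimal, getcontext

def arctan_inv(x: int, digits: int) -> Decimal:
    """Taylor series for arctan(1/x), accurate to roughly `digits` places."""
    getcontext().prec = digits + 10
    term = Decimal(1) / x          # k = 0 term of the series
    total = term
    k = 1
    while abs(term) > Decimal(10) ** -(digits + 5):
        term /= -(x * x)           # next power of 1/x, with alternating sign
        total += term / (2 * k + 1)
        k += 1
    return total

def pi_to(digits: int) -> Decimal:
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = digits + 10
    return 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)

start = time.perf_counter()
ones = "1" * 1_000_000             # ordered string: essentially no computation
print(f"ones: {time.perf_counter() - start:.6f} s")

start = time.perf_counter()
digits_of_pi = pi_to(1000)         # short program, but real work to run it
print(f"pi:   {time.perf_counter() - start:.6f} s")
```

Both generators are short to write down; the difference logical depth cares about is how long they take to run.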

Shannon Entropy (Information Theory)

Shannon entropy, often simply referred to as information, quantifies the amount of uncertainty or “surprise” in a system or message [00:10:11].
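
In formula form, the Shannon entropy of a probability distribution is H = −Σ pᵢ log₂ pᵢ, measured in bits. A minimal Python illustration (the coin probabilities and the function name are just example choices):

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)): the average 'surprise', in bits, per outcome."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))    # fair coin: 1.0 bit of uncertainty
print(shannon_entropy([0.99, 0.01]))  # heavily biased coin: ~0.08 bits
print(shannon_entropy([1.0]))         # certain outcome: 0 bits, no surprise
```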

Historical Roots

The concept of entropy originated in 19th-century thermodynamics with figures like James Clerk Maxwell, Ludwig Boltzmann, and Josiah Willard Gibbs, who used it to describe microscopic randomness in physical systems [00:10:17]. In the 1940s, Claude Shannon, an engineer at Bell Labs, independently developed essentially the same mathematical formula to measure information for communication, revealing that thermodynamic entropy and information content are fundamentally linked [00:10:56]. In this view, entropy is the amount of information required to describe the positions and motions of a system’s atoms and molecules [00:11:20].

Principle of Compression and Regularities

Shannon entropy also determines how efficiently a message can be compressed by exploiting its statistical regularities [00:11:32], [00:38:00]. If a message has patterns (e.g., frequently occurring letters or words), it can be compressed [00:11:51]. For example, Morse code assigns shorter codes to more frequent letters [00:36:48].
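
The same idea can be shown with a toy prefix code in the spirit of Morse code. The sketch below is a standard Huffman construction, not anything specific to the episode; the sample sentence and function name are arbitrary:

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    """Build a prefix code in which frequent symbols get shorter codewords."""
    freq = Counter(text)
    # Heap entries: (combined frequency, tiebreaker, {symbol: code so far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("the quick brown fox jumps over the lazy dog " * 50)
# The most frequent symbols (space, 'o', 'e', ...) receive the shortest codes.
for symbol, code in sorted(codes.items(), key=lambda kv: len(kv[1]))[:5]:
    print(repr(symbol), code)
```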

Lempel-Ziv-Welch (LZW) Compression

The LZW algorithm is a practical application of these compression principles [00:38:14]. It is an adaptive coding method that learns which combinations of letters or patterns occur more frequently in a message and automatically assigns shorter codes to them [00:39:01]. This allows efficient, on-the-fly encoding and decoding that approaches the theoretical Shannon bound for communication channel efficiency [00:39:51]. LZW and closely related Lempel-Ziv methods underlie common file formats such as .zip and .gif [00:40:04]. Interestingly, once a message is well compressed, its encoded form looks essentially random because the regularities have been removed; compressing it a second time therefore yields no further reduction and typically makes the file slightly larger [00:40:40].
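
Below is a minimal sketch of the encoding side of the LZW idea; real implementations add variable-width bit packing, dictionary resets, and the matching decoder. The fixed 12-bit code size, the sample inputs, and the function name are simplifying assumptions made only for this illustration:

```python
import os

def lzw_encode(data: bytes) -> list[int]:
    """Minimal LZW encoder: learns the byte sequences seen so far and emits
    one integer code per longest already-known sequence."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate
        else:
            output.append(dictionary[current])
            dictionary[candidate] = next_code   # learn the new pattern
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(dictionary[current])
    return output

text = b"to be or not to be, that is the question. " * 50
rand = os.urandom(len(text))

# Compare original size in bits with the encoded size, assuming 12 bits per code.
print(8 * len(text), 12 * len(lzw_encode(text)))  # repetitive input: large saving
print(8 * len(rand), 12 * len(lzw_encode(rand)))  # random input: slight expansion
```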

Comparison and Role in Defining Complexity

Both algorithmic complexity and Shannon entropy are measures of “how hard it is to describe something” [00:46:01], and they are closely related [00:46:17]. While useful for quantifying information content and compressibility, they highlight a core challenge in defining complexity: perfectly ordered systems come out trivially simple, while completely random systems come out maximally complex, even though neither matches the intuitive notion of a complex system [00:07:02], [00:08:18]. This reinforces the idea that true complexity resides in structures that are neither perfectly ordered nor entirely random [00:09:23].