From: jimruttshow8596

Artificial Intelligence (AI) is considered a “missing link between philosophy and mathematics” [01:34:00]. Its aim is to automate and scale the execution of processes that allow language to refer to meaning, thereby grounding human language in a mechanical, understandable universe [01:42:00]. The universe is not magic; it operates based on mechanisms without internal “conspiracy” [02:08:00]. Thus, AI offers a testable theory for understanding human nature [02:26:00].

Evolution of AI and the Pursuit of AGI

Initially, AI was a highly optimistic and interdisciplinary field, with pioneers expecting to teach computers to think within a few years [04:10:00]. Significant early achievements included chess-playing programs and basic language understanding [05:03:00]. Many fundamental programming language principles were also developed during this period [05:21:00].

However, the daunting long-term goal of building a machine that thinks led to a shift towards more applied, narrow AI focused on improving statistical automation to achieve tangible, short-term results and academic tenure [06:04:00].

A significant political upheaval in the field occurred when Marvin Minsky asserted that cognitive AI was synonymous with symbolic AI [06:54:00]. He is criticized for actively delaying the development of dynamical systems models and neural networks by influencing funding, contributing to the decline of cybernetics in the U.S. and inadvertently creating a lasting division between cognitive AI and fields like image processing and environmental interaction [07:09:00].

Despite this, many researchers, especially at large technology companies, are still motivated by the original goal of Artificial General Intelligence (AGI) [08:28:00]. The pursuit of AGI forces fundamental questions about human existence and the universe [03:03:00]. There is continued optimism for creating human-level and superhuman computer intelligences, as the brain is not seen as magical, and remaining philosophical questions are considered solvable technical details [10:33:00].

Human vs. Machine Intelligence

Humans may be approximately “the stupidest possible general intelligence” [12:33:00], with significant limitations in working memory size and memory fidelity [12:01:00]. Evolution is seldom “profligate with its gifts” [12:40:43]. The hope is that AI can surpass human capabilities, particularly in areas like language understanding and processing vast amounts of literature [12:54:00].

Developing such AI models, like GPT-2 or GPT-3, requires moderate financial investment (training runs costing tens of millions of dollars) but allows immense amounts of data (e.g., years’ worth of internet text) to be processed in days or weeks, a feat impossible for humans [16:15:00]. While these models don’t currently possess sentience or a unified model of the universe, their capacity to generate coherent text and images that pass a Turing test for human-like output is striking [17:52:00].

The ability to operate in three dimensions and societies’ need for simplified, uniform rules (e.g., common laws for financial systems) are discussed as analogies for building AI: well-defined lower-level units make higher-level emergence tractable [00:30:08].

Philosophical Underpinnings of Mind and Reality

The concept of “matter” in physics is seen as a way to discuss information and to measure periodic changes in position and momentum [31:42:00]. Physics itself is the hypothesis that there is a “causally closed lowest layer” describing the universe, a very successful and so far unchallenged hypothesis [32:28:00].

The idea of “idealism,” where conscious experience is primary, struggles to explain the external reality that “dreams us” [33:57:00]. Conversely, the notion of mind emerging from brains, building up from physics to chemistry, biology, neurons, and nervous systems, is a more common materialist perspective [34:47:00].

The “Operating System” of Mind and Society

The term “spirit,” now often dismissed as superstitious, can be understood as an “operating system for an autonomous robot” [38:32:00]. Similarly, societies and civilizations have operating systems that are not physical but exist through the “coherent interactions of individuals” [39:10:00].

This perspective highlights that complex entities like business companies are virtual constructs, “standing wave[s] essentially of action and motion,” yet they have a real impact on the physical world (e.g., a coal mining company digging coal) [45:01:00]. While tracking these entities at an atomic or even human level would be absurd, abstract models like accounting allow for precise predictions and influence real-world outcomes [45:53:00].

Feedback Loops and Emergence

The concept of feedback loops is ancient, predating modern control theory, and is critical for creating higher levels of complexity in systems [40:58:00]. However, dynamical systems models based on feedback are considered “just models” of the statistical dynamics of too many parts to count, rather than inherently “real” [42:22:00]. Reality is seen as having layers of description where coherent models can be formed, which we then organize into hierarchies with “false relationships” [43:44:00]. For example, personality is a “model category” for human behavior [44:11:00].
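To make the feedback-loop idea concrete, here is a minimal Python sketch (not from the episode; the setpoint, gain, and disturbance values are illustrative assumptions). A proportional controller repeatedly measures the deviation from a target and feeds a correction back into the system, which is enough to hold the state near the target despite a constant disturbance:

```python
# Minimal negative-feedback loop: a proportional controller nudges a
# system state toward a setpoint, illustrating how feedback stabilizes
# dynamics that would otherwise drift. All values are illustrative.

def simulate(setpoint: float = 20.0, gain: float = 0.4, steps: int = 20) -> list[float]:
    temperature = 5.0              # initial state, far from the target
    history = []
    for _ in range(steps):
        error = setpoint - temperature    # measure the deviation
        temperature += gain * error       # corrective action (the feedback)
        temperature -= 0.5                # constant disturbance (heat loss)
        history.append(temperature)
    return history

if __name__ == "__main__":
    for t in simulate():
        print(f"{t:.2f}")  # settles near the setpoint despite the disturbance
```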

“The mind does not exist as a physical thing; it exists over the coherent interactions of the neurons or whatever are the constituting parts.” [39:21:00]

The human brain relies on a long childhood for development, which allows for more training data and the creation of better abstractions [01:17:09]. This extended maturation period may be a key difference between human and ape intelligence, possibly linked to bipedalism and the need for premature birth [01:20:03]. Brain size alone is not the sole deciding factor in intelligence [01:22:20].

The Role of Symbols and Language

The ability to perform grammatical decomposition in language is a key distinguishing factor between humans and other apes [01:22:50]. While animals like elephants can learn to draw, they don’t generalize or create symbolic depictions; they reproduce strokes [01:22:58]. Similarly, gorillas raised in human-like environments don’t achieve grammatical decomposition in their “drawings” or language use [01:23:47].

The use of symbols and recursive language represents a massive compression of information, making the brain exponentially more effective than merely manipulating images [01:24:20]. The ability to grasp increasingly abstract concepts is a predictor of cognitive ability, as observed in fields like computer science [01:26:05].
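As a toy illustration of the compression claim (a sketch, not from the episode), a recursive grammar of just two rewrite rules describes strings whose length grows exponentially with expansion depth; the rules are a far shorter description than the strings they generate:

```python
# A handful of recursive rewrite rules generates an exponentially large
# output string: the grammar is a compressed description of the result.

RULES = {"A": "AB", "B": "A"}   # hypothetical toy grammar (the Fibonacci word)

def expand(symbols: str, depth: int) -> str:
    if depth == 0:
        return symbols
    rewritten = "".join(RULES.get(s, s) for s in symbols)
    return expand(rewritten, depth - 1)

if __name__ == "__main__":
    for depth in range(0, 25, 6):
        out = expand("A", depth)
        print(f"depth {depth:2d}: {len(RULES)} rules describe {len(out)} symbols")
```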

Cognitive Architectures and Future Directions

Cognitive architectures, originating in psychology with strong influence from cybernetics and AI, aim to identify and implement the structure and principles of the human mind [01:29:35]. While machine learning primarily focuses on learning principles and functional approximation, cognitive architectures address the specific organization of the human mind for feats like language learning, social interaction, and symbolic reflection [01:30:07].

Unlike typical layered neural networks, the brain is organized into complex, interconnected regions, resembling a city with various transport networks (local, long-range, general interconnection) [01:31:12]. Modern AI models, like transformers (e.g., GPT-2/3), achieve long-range dependencies in text and images by binding features across dimensions into a relational graph using “attention” and “self-attention” [01:35:19]. This allows for coherence in lengthy generated outputs [01:35:40].
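A minimal sketch of scaled dot-product self-attention, the mechanism named above, makes the idea concrete; the toy dimensions and random weights are illustrative assumptions, and real transformers add multiple heads, residual connections, and trained parameters:

```python
import numpy as np

# Minimal scaled dot-product self-attention (the core of a transformer
# layer): every position scores its relevance to every other position,
# so a dependency can span the whole sequence in a single step.

def self_attention(x: np.ndarray, w_q, w_k, w_v) -> np.ndarray:
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])         # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ v                              # mix values by relevance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_model = 8, 16                        # toy sizes
    x = rng.normal(size=(seq_len, d_model))         # token embeddings
    w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    print(self_attention(x, w_q, w_k, w_v).shape)   # (8, 16)
```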

Psi and MicroPsi

Dietrich Dörner, a German psychologist and cybernetician, developed the Psi theory, which influenced Joscha Bach’s work. Dörner, initially optimistic about completing the project of building thinking computers by the late 1970s [01:40:03], independently developed many AI ideas, including situated agent architectures with autonomous motivation based on cybernetic feedback loops [01:40:30]. He was one of the few psychologists championing theoretical psychology to bridge AI and psychology [01:41:03].

Bach systematized Dörner’s Psi theory in his book, “Principles of Synthetic Intelligence (Psi): An Architecture of Motivated Cognition,” translating its concepts into something implementable and accessible to AI and cognitive science communities [01:43:00]. This work led to the MicroPsi project, which aims to provide a computational framework for cognitive architectures [01:44:01].

Future directions for cognitive architectures include:

  • Leveraging large-scale, distributed computation platforms (e.g., Apache Ignite, Flink, Spark) to manage complexity and scale, moving beyond single-machine or small-cluster designs [01:46:32] (see the sketch after this list).
  • Focusing on how different parts of an architecture implement general principles that allow them to learn to interact with each other, rather than homogeneous representations across systems [01:47:19].
  • Allowing systems to self-organize to take advantage of network realities for maximized performance [01:48:06].
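As a sketch of the first point above, here is a minimal PySpark example (Spark being one of the platforms named). The node records and the update_node rule are hypothetical, not part of MicroPsi or any published architecture; the point is only that many independent local updates parallelize naturally across a cluster:

```python
from pyspark.sql import SparkSession

# Hypothetical sketch: spreading many independent node updates of a
# cognitive architecture across a Spark cluster instead of one machine.
# `update_node` and the node records are illustrative assumptions.

def update_node(node: dict) -> dict:
    # placeholder local rule: decay the node's activation toward zero
    return {**node, "activation": node["activation"] * 0.9}

if __name__ == "__main__":
    spark = SparkSession.builder.appName("node-update-sketch").getOrCreate()
    nodes = [{"id": i, "activation": float(i)} for i in range(1000)]
    updated = (spark.sparkContext
               .parallelize(nodes)   # partition the nodes across the cluster
               .map(update_node)     # apply the local rule in parallel
               .collect())           # gather results back to the driver
    print(updated[:3])
    spark.stop()
```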