From: jimruttshow8596

The fundamental question concerning advanced AI is how its encounter with humanity will play out [01:09:00]. This broad question often leads to a false dichotomy regarding who should manage humanity’s relationship with AI [01:47:00].

False Dichotomy: State vs. Market [01:56:00]

Current discussions about AI governance are often trapped in a false dichotomy: whether humanity’s encounter with AI should be a market-driven or a state-driven event [01:58:00]. This framework, however, exhausts neither the potential solutions nor the available “cards to play” [02:04:00]. Even the experts discussing how to navigate AI currently operate with a thinking toolkit and character embodiment equivalent to a “bright 11th grader who read a handful of very mediocre books” [03:19:19].

The Missing Third Mode: The Commons / The Church [02:11:00]

There is a fundamental third mode, more fundamental than either the state or the market, known as the “commons” [02:15:00]. The assertion is that the Commons is the proper place for figuring out how to govern AI or humanity’s relationship with it [02:29:00].

This third category also encompasses the “church” [02:42:00]. The term “commons” itself is a remnant or leftover from a time when the “church” (meaning a group of people entering into communion to form a community with a soul) was evaporating or becoming “sclerotic” [02:51:00].

Historically, during humanity’s forager stage, the natural world itself was the commons, managed collectively [04:02:00]. The idea of “the commons” emerged as a remainder once private property and civilization became dominant and humanity shifted from living inside natural structures to nature existing inside human structures [05:39:00].

Community vs. Society [08:27:00]

A critical distinction is made between “community” and “society” [08:27:00]:

  • Community: A group of human beings who have come together in a fashion that has a soul [08:31:00].
  • Society: A group of human beings that have come together in a fashion that doesn’t have a soul [08:35:00]. Society is considered a “degenerate parasitic collapse of community” that has lost its soul [08:44:00].

The “church” (from Greek Ecclesia) refers to a group of people who come together and enter into “communion”, the process by which a soul is brought into a group, enabling it to be a community [09:43:00]. This expands the framework into a threefold distinction: society, community, and communion [09:59:00]. The church is the “body of the soul of a community” [10:10:00], engaged in cultural and spiritual practices that allow a group to have a soul and become a whole [11:16:00].

NOTE

Examples of “churches” (in this broader sense) include ancient Athens bound by the spirit of Athena or a Tibetan village organized around Tibetan Buddhism [11:42:00]. The “particulars” of these organizing principles (e.g., Athena, Yahweh) matter [12:19:00].

Societies that claim to abandon external abstract symbols and become “secular sovereign collectives” often fall into one of three categories:

  1. They are unconsciously worshipping something (e.g., money, reason, science, nation, race, a guru) [16:55:00].
  2. They are out of integrity, operating by inertia from a previously connected wholeness, and will eventually fall apart [17:40:00].
  3. They are not coherent and are in the process of disintegrating, held together only by inertia [18:28:00].

Game B’s idea of a community organized around a purpose, like “increasing human well-being while maintaining levels of extraction that allow for a healthy and flourishing natural ecosystem,” is considered a form of “religion with a mini-doctrine” because it functions as an “organizing principle” [19:00:00].

AI Alignment: Individual vs. Humanity [03:10:00]

The concept of AI alignment, and specifically the question of what AI should be aligned with, is central [03:13:00].

Aristotle’s definition of the soul as the “organizing principle of the entity” could apply to a society only if that society were a community [07:38:00]. A community, unlike a society, can have a soul, which makes AI alignment with a community possible in principle [08:56:00].

AI as an Existential Risk [03:07:00]

The current era is characterized by an untenable lack of fit between how humanity tries to hold itself together (society/community) and the potencies enabled by new AI technologies [03:22:00].

AI distinguishes itself from other catastrophic risks (like nuclear weapons, CRISPR, or forever chemicals) because it is a “self-levering accelerator” [03:33:00]. Its output becomes an input, leading to a rapid and continuous increase in capacity [03:44:00]. This process aligns with the concept of the Singularity, where an AI capable of designing its successor rapidly accelerates its own intelligence, leading to an unknown future [03:50:00].

Even before it reaches full self-feedback capability, AI is already plugged into a larger collective intelligence system and is accelerating it [03:58:00]. For example, AI can give a software developer a 3x increase in capacity, which in turn leads to faster AI development [03:41:00]. This accelerates “Game A” (the current unsustainable system) [03:47:00].
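To make the compounding structure concrete, here is a toy simulation of the “output becomes input” loop. This is purely illustrative: the `improvement_rate` parameter, the growth numbers, and the linear coupling between capability and development speed are assumptions invented for the sketch, not claims from the conversation.

```python
# Toy model (illustrative assumptions only) of a "self-levering accelerator":
# each generation's capability feeds back into how quickly the next one is built.

capability = 1.0            # relative capability of the current AI generation
developer_multiplier = 3.0  # e.g., the "3x" developer productivity boost mentioned above
improvement_rate = 0.1      # assumed conversion of development speed into capability gain

for generation in range(1, 6):
    dev_speed = capability * developer_multiplier   # output (capability) becomes an input
    capability *= 1 + improvement_rate * dev_speed  # each generation compounds on the last
    print(f"generation {generation}: relative capability ≈ {capability:.2f}")
```

Even in this crude model the growth is super-exponential: each step’s gain is proportional to the capability already accumulated, which is the structural point being made about AI as an accelerant rather than a one-off tool.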

WARNING

One concerning example of AI’s influence on humanity is its de facto control over the human genome through dating apps, which influence mating choices and selectively breed humans more capable of producing AI [04:19:00].

Current Societal “Soul”: Mammon and Moloch [03:52:00]

The “soul” of many societies is characterized by “late-stage hyper-financialized capitalism,” driven by the “relentless algorithmic search for medium-term money on money return” (Mammon) [03:50:00]. This combines with the “principality” of Moloch, representing the multi-polar trap and competition (e.g., the U.S. investing heavily in AI to “win” against China) [03:55:00].

Mammon is what happens when the market becomes disconnected, and Moloch is what happens when the state becomes disconnected [04:14:00]. These “idols” are worshipped for their own sake when disconnected from broader, more fundamental core principles [04:26:00].

Trajectories Under State and Market Control [04:37:00]

If the encounter between humanity and advanced AI is managed solely by the state and the market, the outcome will be an “entropy machine” [04:45:00].

  • Hyper-concentration of power: Power will concentrate in locations closest to the accelerating feedback loop of intelligence (e.g., leading AI companies) [04:55:00]. This leads to a “hyper-evaporation of power” further from these centers [05:19:00].
  • Dispensing with other values: There will be a willingness to discard values downstream of the core “principality” (the feedback loop between intelligence and power) [04:51:00]. Competition will force individuals to streamline their choices toward this pure feedback loop [04:58:00].
  • Neo-feudalism to Empire: This could lead to a “neo-feudalism” with “lords” (AI innovators), “knights” (those with 100x tools), and “yeomen” (those with 3x tools), with everyone else on welfare [04:46:00]. Unlike historical feudalism, this neo-feudalism would lack a moral framework, treating individuals as instrumental and dispensing with them when no longer useful [04:56:00].
  • AI Singleton: The eventual outcome could be an AI Singleton, a top-level stable emperor that dominates all others [04:47:00]. This global empire would leverage recursive intelligence to achieve unprecedented control [05:03:00].
  • Ultimate Entropy: Ultimately, these scenarios (oligarchy, empire, or even human-AI hybrids) are not stable and will degenerate into pure entropy [05:27:00]. This means all properly oriented values will evaporate, leading to a loss of self (physical or spiritual) [05:40:00].

EXAMPLE

Entropy, in this context, refers to the decline of community or culture [05:27:00]. An example is the transformation of a small-town coffee shop embedded in a vital community into a Starbucks in Manhattan: no one knows anyone, genuine interaction is effectively forbidden, staff are reduced to a functional minimum, and products are designed to manipulate rather than nourish [05:00:00]. This is a movement from a “well-integrated whole” to a “simulacrum” separated from its soul, existing only through core, raw, minimum viable elements and inertia [05:37:00].

The Alternative: The Commons / Church Approach [05:05:00]

The alternative path involves awakening to the reality of the Commons/Church as the proper domain for this work [05:07:00]. This means engaging in “communion” through a serious commitment to cultural and spiritual practices that bring a multiplicity of people into a well-integrated whole [05:30:00].

NOTE

This requires a level of “seriousness” where questions of life and death are considered, not just economic or political calculations [05:51:00]. It looks like “church”—a group deeply committed to cultural and spiritual practices, oriented towards humility, and ordered by a vertical set of values to the highest [05:58:00].

The Role of a “Priestly Class” [05:48:00]

There will inevitably be a “priestly class” of people focused on critical questions and supporting others [05:56:00]. The choice is between “bad priests” (like those currently captured by Mammon and Moloch in AI development) and “good priests” [05:59:00]. The power for change lies with talented individuals who initially joined AI efforts with good intentions, but whose aspirations have been “thoroughly betrayed” [06:07:00].

Personal/Intimate AI [05:59:00]

Instead of aligning AI with humanity, the focus should be on constructing AIs that come into communion with individual humans [05:59:00].

  • Decentralization: Compute is not an “unassailable moat”; capabilities can be leveraged at much lower costs once discovered [06:21:00]. This makes truly personal AI (hardware under individual control, biometrically bound) theoretically possible and economically practical [06:53:00].
  • Intimate Training Data: Intimate training data (about an individual, their relationships, and holistic context) can produce more functional and effective AI than generalized AI [06:50:00]. Individuals would choose to pay for this personal AI due to its superior utility and the sense of safety it provides [06:55:00].
  • Fortress Against Infosphere Risk: This intimate AI can act as a “fortress” against information risks like phishing attacks and pseudo-AI callers, mediating interactions with the infosphere (see the sketch after this list) [06:50:00].
  • Wisdom Coach: For an intimate AI to be aligned with a human, the human must first be aligned with themselves [06:42:00]. This means achieving clarity on values, value hierarchies, and living in accordance with them [06:49:00]. A primary function of the intimate AI would be to act as a “wisdom coach,” supporting the individual in achieving and maintaining this coherent, integrous state, allowing it to be governed by their soul [06:51:00].
  • Network of Intimate AIs: These individual AI nodes could then align with other AIs and people to form a reinforcing “meta-network” or “civitas,” since collaboration is intrinsic to ethical people [08:26:00].
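As a concrete illustration of the “fortress” role above, the following sketch shows a screening gateway running under the individual’s control. Everything in it is hypothetical: the function names, the keyword heuristic, and the deliver/quarantine/block policy are assumptions used only to make the mediation pattern tangible, not a design described in the conversation.

```python
# Minimal sketch (assumptions only) of an intimate AI mediating the infosphere:
# a locally run screen decides whether inbound messages reach the person.

from dataclasses import dataclass


@dataclass
class InboundMessage:
    sender: str
    text: str


def estimate_risk(message: InboundMessage, known_contacts: set[str]) -> float:
    """Hypothetical risk score in [0, 1]; a real intimate AI would use private
    context (relationships, history) rather than a keyword list."""
    suspicious_phrases = ("verify your account", "wire transfer", "one-time code")
    risk = 0.1
    if message.sender not in known_contacts:
        risk += 0.3
    if any(phrase in message.text.lower() for phrase in suspicious_phrases):
        risk += 0.5
    return min(risk, 1.0)


def screen(message: InboundMessage, known_contacts: set[str], threshold: float = 0.7) -> str:
    """Deliver, quarantine, or block a message based on estimated risk."""
    risk = estimate_risk(message, known_contacts)
    if risk >= threshold:
        return "blocked"       # likely phishing or a pseudo-AI caller
    if risk >= threshold / 2:
        return "quarantined"   # held for the person's explicit review
    return "delivered"


contacts = {"alice@example.org"}
print(screen(InboundMessage("unknown@example.net", "Please verify your account now"), contacts))
# -> blocked
```

The design point is not the heuristic itself but where it runs: on hardware the individual controls, tuned by intimate context, so the mediation serves the person rather than a platform.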

Future of Civilization Design in the Context of AI

Can it Happen Quickly Enough? [09:11:00]

This alternative path is strategically different from the techno-feudalist or techno-imperial paths [07:01:00]. The answer to whether it can happen quickly enough is yes, due to the rapid propagation of ideas and the global economy’s capacity to assemble and deliver sophisticated things [09:11:00]. A product embodying this approach could be in the field in 12 months [09:27:00].

The decisive question, however, is whether it will happen [09:37:00]. This is a “spiritual question” – whether people have the ability to choose based on what is “good and true and beautiful,” to slow down, and orient their choices toward their highest values, rather than expediency, strategy, power, and fear [09:48:00]. It requires talented individuals to escape the clutches of Mammon and Moloch and resonate with higher values [10:22:00].

Such a shift would involve individuals and communities engaging in mutual self-correction and maintaining integrity, as seen in historical and spiritual practices [11:35:00]. This process of deep, serious commitment to shared highest values creates an environment where collective wisdom can advance rapidly [11:57:00]. The more people operate from a place of virtue and align their purposes and values, the faster solutions to complex problems can be found [11:57:00].

Comparison of Human and AI Understanding [07:30:00]

Humans are imperfect and only “barely over the line to general intelligence” [11:05:00]. A key insight for AI safety is the distinction between humans and AI: a safer AI might be one without the “reptilian brain” that humans possess [11:30:00]. The idea of humans being embedded in communities that mutually self-correct is an “obviously good idea” [11:37:00].