From: jimruttshow8596

The encounter between humanity and advanced artificial intelligence (AI) is a central concern for the future [00:01:09]. This discussion explores the various potential trajectories and necessary shifts in approach for managing this relationship.

Current Discourse Limitations

Currently, there is a perceived false dichotomy in cultural discourse regarding the institutional structures responsible for managing humanity’s relationship with AI: it is often framed as either a market-driven or a state-driven affair [00:01:42]. This perspective assumes that these two options exhaust all possible approaches [00:02:04]. Experts discussing how to navigate AI often operate with a limited toolkit of thinking and character, comparable to a bright 11th grader who has read mediocre books [00:31:21]. This limitation stems from the evaporation, over the past 500 years, of a “third member of the team”: the commons, the church, or the sacred [00:31:47].

The Role of the Commons and the Church

A proposed alternative, or more fundamental, mode for managing AI is the “commons” [00:02:22]. This concept asserts that the commons is the proper location for figuring out how to govern AI, or humanity’s relationship with AI [00:02:31]. This third category, the commons, is described as a remnant left behind as the “church” has evaporated over time [00:02:51].

The “church” in this context refers to a group of people who have come together and are entering into “communion,” a process that brings a soul into a group, enabling them to be a “community” [00:09:49]. A “community” is a group of human beings that has a soul, while “society” is a group without one, often parasitic on community [00:08:35]. The church is seen as the “body of the soul of a community” [00:10:12]. This framework suggests that successful self-governance of communities relies on an “organizing principle” or “principality” that provides coherence, whether it’s a belief system or a shared purpose [00:17:18].

AI’s Accelerating Nature

The current situation is untenable due to the mismatch between existing societal structures and the potencies of new technologies, particularly AI [00:30:51]. AI is distinct from other existential risks like nuclear weapons or bio-engineering because it is a “self-leveraging accelerator” [00:35:35]. Unlike technologies whose outputs are static, such as chemicals, AI’s output becomes an input, producing a recursive feedback loop that increases its own effectiveness [00:35:44].

This phenomenon is akin to the Singularity hypothesis of Vernor Vinge and Ray Kurzweil, in which an AI that is, say, 10% better than a human at designing its successor improves rapidly, leading to exponential growth in capability [00:36:02]. Even without full AI self-improvement, the fact that AI is plugged into a larger collective-intelligence system, where increased intelligence output becomes an input for further intelligence, means the meta-system follows the same rapid acceleration curve [00:37:15]. This acceleration is significantly impacting Game A (the current dysfunctional societal system), pushing it faster toward potential collapse [00:37:48].
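The compounding dynamic described above can be sketched as a toy model (not from the source): if each design cycle yields a successor that is a fixed fraction better at the design task, capability grows geometrically rather than linearly. The 10% figure and the cycle counts below are purely illustrative.

```python
def capability_after(generations: int, gain: float = 0.10) -> float:
    """Capability relative to the first system after a number of design cycles.

    Toy model of a self-leveraging accelerator: each cycle's output
    (a slightly better designer) becomes the input to the next cycle.
    """
    capability = 1.0
    for _ in range(generations):
        capability *= 1.0 + gain  # 10% improvement compounds every cycle
    return capability


if __name__ == "__main__":
    for n in (1, 10, 50):
        print(f"after {n:2d} cycles: {capability_after(n):9.1f}x")
```

Because the gain multiplies rather than adds, fifty 10% cycles yield over a hundredfold improvement, which is the intuition behind the "acceleration curve" in the discussion.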

Current Trajectory: Entropy and Neo-Feudalism

If AI development continues to be managed solely by the state and market, the trajectory will lead to an “entropy machine” [00:44:48]. This means:

  • Hyper-concentration of power: Power will become hyper-concentrated in locations closest to the accelerating feedback loop of intelligence, leading to a hyper-evaporation of power further from these centers [00:44:59]. This could manifest as a form of neo-feudalism, with AI “lords” controlling resources and advanced tools, while others are relegated to increasingly niche or dependent roles [00:46:51].
  • Dispensing with values: There will be an increasing willingness to dispense with values downstream of the core “principality” of power and intelligence [00:45:51]. Unlike historical feudalism, where there was an oath of fealty to God, this new form would lack a higher moral framework, leading to purely instrumental relationships where usefulness determines worth [00:47:11].
  • Ultimate degeneration into entropy: Whether this leads to a global oligarchy or an AI Singleton (a single dominant AI entity), the ultimate outcome is pure entropy [00:50:31]. This means the evaporation of properly oriented values, leading to a state where humanity is no longer “ourselves,” existing neither physically nor spiritually [00:50:42].
  • Entropy of culture/community: This “entropy” refers to the degradation of human community and culture, moving from a rich, well-integrated whole (like a local coffee shop where everyone knows each other) to a disaggregated, functional simulacrum (like a large chain coffee shop with no personal connection), ultimately separating from its soul [00:52:08].

An Alternative Path: Personal and Intimate AI

A plausible alternative involves the development of personalized or “intimate” AI [01:00:02].

  • Decentralized AI: While large-scale compute is needed for discovery, AI innovations can be recapitulated and leveraged at significantly less expense, making the production of “perfectly personal AI” theoretically and economically practical [01:00:28]. This means the hardware could be in the user’s physical control and biometrically bound to them [01:00:57].
  • Intimate Training Data: The major differentiator in AI usefulness is often the training data [01:01:46]. While objective data may become a commodity, incredibly intimate training data, specific to an individual, will not [01:02:09]. An AI trained holistically on an individual’s relationships and personal context is hypothesized to be more functional and effective than generalized AI [01:02:26]. This could drive demand for personal AI due to its superior utility and the feeling of safety from betrayal [01:02:46].
  • Digital Fortress: In an environment of high information risk, personal AI can act as a “fortress” between the individual and the “infosphere,” managing communications and protecting against attacks [01:03:32].
  • Alignment with the Soul: For an intimate AI to be truly aligned with an individual, that individual must first be aligned and coherent with themselves, possessing a clear set of values and purposes [01:04:47]. The personal AI’s primary function would be to support the human in achieving and maintaining this state of integrity and wisdom, enabling it to be governed by their “soul” [01:05:10].
  • Network of Ethical AIs: These individual AIs could then link to form a reinforcing “meta-network,” analogous to a civitas (a community of citizens) [01:07:39]. This requires individuals to be ethical first, since collaboration in mutually beneficial behavior is intrinsic to how ethical people operate in reality [01:08:11].

The Call for a New Approach

The fundamental challenge is whether humanity can rapidly adopt this alternative trajectory [01:09:00]. Ideas and sophisticated products can propagate rapidly through the global infosphere and economy [01:09:26]. It is asserted that a product embodying these principles could be in the field within 12 months [01:10:27].

The decisive question is ultimately spiritual: whether people can choose based on what is good, true, and beautiful, and slow down enough to align their choices with their highest values, rather than expediency, strategy, power, and fear [01:10:41]. This requires individuals, especially talented AI developers, to escape the “clutches of Mammon and Moloch” (representing disconnected market and state forces) and resonate with higher values [01:11:27].

This shift would involve forming a “proper Priestly class” – not a bad one focused on control, but one comprising individuals deeply committed to building good and powerful AI, whose original aspirations were betrayed by current systems [01:12:05]. More people operating from a place of virtue, with aligned purposes and values, can add to this conversation, rapidly resolving complex questions through dedicated, orderly collaboration, similar to past successful community-building efforts [01:13:20]. This involves a willingness to be “convicted” – to accept feedback that points out misalignments with one’s stated values, leading to rigorous self-correction and growth [01:14:44].

The human capacity for self-correction within a supportive community is seen as essential for navigating AI’s complex impact across industries and for avoiding the dystopian outcomes of unbridled technological development.