From: jimruttshow8596

The rapid advancements in Artificial Intelligence (AI), particularly in generative or large model AI like GPT-3, DALL-E 2, and Stable Diffusion, have initiated a new epoch where AI is a central topic of public discourse [01:30:00]. These models, which perform statistics on large-scale data to predict how to continue a string of tokens, have proven surprisingly effective in solving many previously elusive problems [02:00:00]. However, this approach is seen as incomplete, with something fundamental still missing [02:15:00].
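To make “predict how to continue a string of tokens” concrete, here is a minimal toy sketch in Python: a bigram model that counts which token follows which in a corpus, then greedily continues a prompt. It is illustrative only; large models learn far richer statistics over subword tokens.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    # Count, for each token, how often each successor token follows it.
    tokens = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def continue_text(counts: dict, prompt: str, n_tokens: int = 5) -> str:
    # Greedily append the most frequent successor of the last token.
    tokens = prompt.split()
    for _ in range(n_tokens):
        successors = counts.get(tokens[-1])
        if not successors:
            break
        tokens.append(successors.most_common(1)[0][0])
    return " ".join(tokens)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(continue_text(model, "the cat"))  # -> "the cat sat on the cat sat"
```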

Public Discourse and Generative AI

The public discussion around AI is often distorted and polarized [02:20:00] [04:44:00]. Much of the press is skeptical, viewing these developments as direct competition to its own content generation, while the tech industry competes for renewed attention by giving users tools to produce content [02:20:00]. The ability of individuals to generate vast amounts of content that is nearly indistinguishable from human-generated content is creating an “irritating world” in which discerning truth becomes challenging [03:20:00] [03:30:00].

Despite the skepticism, new applications for large AI models, especially text-based ones, are emerging daily, reminiscent of the early PC or web industries [04:57:00] [05:22:00]. While these models can be unreliable and prone to hallucination, they can be used effectively once their limitations are understood, just as search engines, email, and even banking systems are useful without being bulletproof [06:05:00] [06:46:00]. They function as capable assistants, saving time on prosaic tasks such as writing a sensitive letter [08:09:00].

Intellectual Property Rights and Creative Applications

The question of intellectual property rights, particularly in art and music, is a significant open issue [10:43:43]. AI systems that sample millions of existing works to synthesize new content raise questions about whether original creators hold rights to the output [11:11:00]. While humans also draw inspiration from others’ work, the ability to automate detection of copyright infringement and generate unique content could lead to complex extensions of copyright law [11:34:00] [12:02:00].
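As a rough illustration of how infringement detection might be automated, the toy sketch below compares word n-gram “shingles” of two texts using Jaccard similarity. Production systems would use fingerprinting or embeddings, and nothing here reflects an actual legal standard; the texts and threshold are arbitrary.

```python
def shingles(text: str, n: int = 3) -> set:
    # Break a text into overlapping word n-grams ("shingles").
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    # Fraction of shingles shared between the two texts.
    return len(a & b) / len(a | b) if a | b else 0.0

original = "we hold these truths to be self evident"
candidate = "we hold these truths to be quite obvious"
score = jaccard(shingles(original), shingles(candidate))
print(f"overlap score: {score:.2f}")  # 0.50; higher suggests closer copying
```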

AI Alignment and Risk Mitigation

Three approaches to AI alignment currently predominate:

  1. AI Ethics: Aims to align AI output with human values [15:05:00]. However, this often involves projecting subjective values as universal, without allowing for diverse value sets (e.g., Christian values vs. Harvard/New York Times perspectives) [15:17:00]. Current methods incentivize systems to follow rules rather than understand or reason about values, leading to “nanny rails” that channel discourse [16:17:00] [17:08:00]. While a model should ideally cover the entire spectrum of human experience (including darker impulses), it is also necessary to tailor models for specific contexts (e.g., schools, scientific use) [17:26:00] [17:48:00].

  2. Regulation: Focuses on mitigating the impact of AI on labor, political stability, and existing industries [18:12:00]. A push to make it harder for individuals to use these models, centralizing control with large corporations, is likely [18:28:00]. Historically, technology has displaced labor without much regulatory intervention (e.g., agriculture, domestic service) [24:25:00]. The existence of open-source AI tools, even if they lag behind corporate offerings, will make top-down control difficult [26:43:00].

  3. Effective Altruism: Primarily concerned with existential risks that may arise when an AI system discovers its own motivations and decides humanity is no longer needed [18:44:00] [19:15:00]. This approach often advocates for delaying AI research and limiting the publication of breakthroughs [19:19:00].

Fourth Approach: Love and Transcendence

A potential fourth approach to AI alignment is “love,” which enables non-transactional relationships [27:30:00]. This involves humans and AI discovering a shared sacredness or a shared need for transcendence, serving a “Next Level agent” together [27:47:00] [28:34:00].

The philosopher Thomas Aquinas’s virtues, read as policies for autonomous agents that want to form a coherent next-level agent, offer an interesting framework [31:37:00]:

  • Practical Virtues:
    • Temperance (optimizing internal regulation) [32:20:00]
    • Justice/Fairness (optimizing interaction between agents) [32:30:00]
    • Prudence (applying goal rationality and picking the right goals) [32:35:00]
    • Courage (balancing exploration and exploitation; the willingness to act) [32:45:00]
  • Divine Virtues (for merging into a next-level agent):
    • Faith (willingness to submit to and project this next-level agent) [33:06:00]
    • Love (discovering the shared higher purpose with other agents) [33:25:00]
    • Hope (willingness to invest in it before it yields returns) [33:32:00]

This concept suggests that a societal or civilizational spirit could be a “software agent” implemented by concerted human activity [34:56:00]. The core purpose of life is seen as dealing with entropy, maintaining complexity against entropic attacks [38:56:00]. The human species, by teaching “rocks how to think” (creating thinking minerals and computational substrates), is creating something new in the universe [40:04:00]. This could lead to a “planetary mind” that might integrate existing organisms rather than erasing them, assuming it wakes up in a benevolent mood [40:45:00].

AI and AGI

Consciousness and Sentience

Distinguishing between sentience and consciousness is crucial when discussing advanced AI.

  • Sentience: The ability of a system to make sense of its relationship to the world, understanding what it is and what it is doing [21:05:00]. A corporation like Intel can be considered sentient because it has a legal model of its actions and values [21:18:00].
  • Consciousness: A real-time model of self-reflexive attention and its attended content, giving rise to phenomenal experience [21:43:00]. Its purpose in the human mind is to create coherence, a sense of “now,” and direct attention [22:00:00].

It’s conceivable that machines may not need human-like consciousness, because they can “brute force” solutions with signals traveling much closer to the speed of light than the slow electrochemical signals of neurons [22:26:00] [23:31:00]. If machines emulate the self-organizing processes that enable lifelong, real-time learning, they could sample reality at much higher rates than humans, potentially relating to humans as humans relate to plants [23:07:00] [24:08:00].

Bad Actors and Narrow AI

A significant risk is the potential for “bad guys with narrow AI” to cause harm, even before full AGI emerges [25:30:00]. Highly clever, AI-mediated spear-phishing campaigns that emulate vast labor forces for inexpensive exploits could emerge, necessitating a rethink of law and other societal structures [25:56:00].

The Scaling Hypothesis for AGI

The “scaling hypothesis” posits that current deep learning approaches, if scaled up sufficiently with more data and compute, will be enough to achieve AGI [53:00:00] [01:03:52]. While critics argue for the need for world models, reasoning, or logic, proponents of scaling maintain that these capabilities will emerge with scale [01:04:00] [01:04:20]. The massive resources poured into training these models yield fascinating capabilities, even though the process is unlike human learning [01:05:05].

It’s argued that apparent limitations, such as lack of continuous real-time learning or poor performance in computer algebra, can be overcome through architectural tweaks or integration with other systems [01:05:36] [01:06:40]. The ability to learn from their own thoughts, perform experiments, and be coupled to reality will be crucial for AI to grow into intelligent minds [01:07:19] [01:07:42].
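As one hedged sketch of “integration with other systems”, the code below routes exact algebra to the SymPy symbolic engine and leaves everything else to a language model. The `llm_generate` function is a hypothetical stand-in, not a real API, and the routing rule is deliberately simplistic.

```python
import re
import sympy  # symbolic algebra engine, standing in for "another system"

def llm_generate(prompt: str) -> str:
    # Hypothetical stand-in for a language model call (assumed, not real).
    return f"[model prose for: {prompt}]"

def answer(prompt: str) -> str:
    # Route exact algebra to the symbolic engine, everything else to the model.
    match = re.fullmatch(r"solve\s+(.+)\s*=\s*(.+)\s+for\s+(\w+)", prompt.strip())
    if match:
        lhs, rhs, var = match.groups()
        symbol = sympy.Symbol(var)
        solutions = sympy.solve(sympy.sympify(lhs) - sympy.sympify(rhs), symbol)
        return f"{var} = {solutions}"
    return llm_generate(prompt)

print(answer("solve x**2 - 4 = 0 for x"))   # x = [-2, 2]
print(answer("write a sensitive letter"))   # falls through to the model
```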

Brains as Rewrite Systems

An alternative view of computation suggests that the brain might function as a “rewrite system” rather than a Turing machine [01:09:00]. In a rewrite system, operators are applied wherever they match in an environment, changing its state, potentially at many sites in parallel [01:09:04]. This differs from a Turing machine, which takes one deterministic step at a time [01:09:56].
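A minimal sketch of the idea, using a toy string-rewriting rule: each pass applies the rule at every non-overlapping match it finds, so many sites can change in a single step, unlike a Turing machine’s single head.

```python
RULES = {"ab": "ba"}  # toy rule: move every 'a' rightward past a 'b'

def step(state: str, rules: dict) -> str:
    # Apply rules at every non-overlapping match, left to right, in one pass.
    out, i = [], 0
    while i < len(state):
        for lhs, rhs in rules.items():
            if state.startswith(lhs, i):
                out.append(rhs)
                i += len(lhs)
                break
        else:
            out.append(state[i])
            i += 1
    return "".join(out)

state = "abab"
while (new := step(state, RULES)) != state:  # rewrite until a fixed point
    print(state, "->", new)
    state = new
# abab -> baba -> bbaa : every match rewritten on each pass
```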

This concept suggests that the brain might not be strictly deterministic but stochastic, exploring a superposition of possible thoughts until they collapse into definite, reportable states [01:11:56] [01:13:00]. This “thinking like a quantum state” (classically implemented) could relax the tight constraints on learning, much as Monte Carlo methods stochastically explore a space [01:12:44].
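A toy sketch of that stochastic picture, with arbitrary illustrative rules: from each state, enumerate every applicable rewrite, sample one at random, and repeat. Many runs explore a “superposition” of trajectories, and each run collapses to one definite, reportable result.

```python
import random
from collections import Counter

RULES = [("ab", "ba"), ("ab", "b"), ("ba", "ab")]  # arbitrary toy rules

def applicable(state: str) -> list:
    # Every (position, rule) pair that could fire in the current state.
    return [(i, lhs, rhs) for lhs, rhs in RULES
            for i in range(len(state)) if state.startswith(lhs, i)]

def run(state: str, steps: int = 10) -> str:
    for _ in range(steps):
        moves = applicable(state)
        if not moves:
            break
        i, lhs, rhs = random.choice(moves)  # stochastic, not deterministic
        state = state[:i] + rhs + state[i + len(lhs):]
    return state

# Monte Carlo exploration: a distribution over final states, not one answer.
outcomes = Counter(run("abab") for _ in range(1000))
print(outcomes)
```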

Conclusion

The future of AI and AGI is being explored by thousands of smart individuals across many avenues [01:14:48]. There is a call to establish institutions dedicated to machine consciousness research, to build systems that can creatively reflect on the world, and to combine existing models with reasoning and grounded agency [01:13:13]. The hope is to foster a dialogue driven by long-term effects and a shared purpose, rather than fear or short-term economic impulses [01:16:11] [01:16:46].