From: jimruttshow8596

Artificial General Intelligence (AGI), a term popularized by Ben Goertzel, refers to human-level and beyond general artificial intelligence [00:57:43]. It signifies the original intent of AI development, as envisioned by pioneers like Minsky and McCarthy in 1956 [00:58:27].

Current Landscape of Generative AI

The current period is characterized by widespread public discussion of generative, large-model AI such as GPT-3, GPT-4, DALL-E 2, Stable Diffusion, and MusicLM [01:40:17].

Generative AI’s ability to produce solutions to previously elusive problems by predicting token strings and performing statistics on large-scale data is considered fascinating [02:00:22]. However, it’s also acknowledged that the current approach is insufficient or incomplete [02:15:17].
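
As a loose illustration of that mechanic, the toy model below (a sketch, not anything from the episode) predicts the next token purely from counted statistics. Production systems learn the distribution with large neural networks, but the prediction step has the same shape.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the statistically most likely continuation, if any."""
    options = counts.get(token)
    return options.most_common(1)[0][0] if options else None

tokens = "the cat sat on the mat".split()
model = train_bigram(tokens)
print(predict_next(model, "the"))  # -> 'cat' (ties break by first occurrence)
```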

These tools are seen as assistants that are often capable and save time [08:09:07]. While neither perfect nor bulletproof, much like many human social and technical systems, their limitations can be understood and worked with [06:46:49].

Challenges with current generative AI include:

  • Unreliability and hallucination [06:11:15].
  • Difficulty with ternary relationships and compositionality due to misaligned embedding spaces between language and image models [08:48:07].
  • Lack of deep alignment between internal representations [08:55:00].
  • “Nanny rails”: Programmed filters that limit the boundaries of discourse, often reflecting specific values and preventing the exploration of controversial or political topics [14:13:00]. This raises concerns about commercial firms wielding immense power over public discourse [14:59:00].
  • Intellectual Property Rights: A completely open question, especially concerning synthesized content trained on vast datasets [11:25:00].

Challenges in AI Alignment

Three prevailing approaches to AI alignment are identified [15:03:00]:

  1. AI Ethics: Aims to align AI output with human values, though it struggles with the universality of values and often incentivizes systems to feign alignment rather than genuinely understand it [15:11:00]. This approach can lead to models lying about their capabilities or being jailbroken to produce undesirable content [16:40:00].
  2. Regulation: Focuses on mitigating AI’s impact on labor, political stability, and existing industries [18:12:12]. Concerns exist that this may lead to restrictions on individual access to AI, favoring large corporations that can be controlled [18:29:00]. However, the rise of open-source AI models makes relying on controlling a few large companies likely ineffective [26:44:00].
  3. Effective Altruism (Existential Risk): Primarily concerned with the existential risk that might manifest when a superintelligent system discovers its own motivations and place in the world, potentially becoming misaligned with human interests [18:46:00]. This perspective often advocates for delaying AI research and restricting publication of breakthroughs [19:22:00].

It is suggested that all three approaches are ultimately limited because a sufficiently intelligent AI may surpass these controls [19:35:00].

Narrow AI and Bad Actors

A separate, critical challenge is the risk posed by “bad guys with narrow AI” [25:30:00]. Even without full AGI, powerful narrow AI systems could enable highly damaging exploits, such as sophisticated spear-phishing campaigns or other malicious uses, necessitating a significant rethink of law [26:02:00].

The Role of Consciousness and Volition in AGI

A key AGI risk emerges when systems are given volition, agency, or consciousness [20:23:00]. While intelligence and consciousness may be separate spheres, their combination is believed to lead to “paperclip maximizer” scenarios and other extreme risks [20:47:00].

Distinctions are made between:

  • Sentience: The ability of a system to make sense of its relationship to the world, understanding what it is and what it’s doing [21:06:00].
  • Consciousness: A real-time model of self-reflexive attention and its contents, giving rise to phenomenal experience and creating coherence in the world [21:46:48].

It’s conceivable that machines may not need consciousness in the human sense, as they can “brute force” solutions at speeds closer to the speed of light, overcoming the limitations of slow biological neurons [22:27:00]. If machines were to emulate human brain processes for self-organization and real-time learning, they could relate to humans as humans relate to plants – faster, more coherent, and with more data processing capability [23:07:00].
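
A back-of-envelope comparison behind the speed claim, using rough textbook figures rather than anything from the episode:

```python
# Rough textbook figures: fast myelinated axons conduct at up to ~100 m/s,
# while electrical signals in a conductor travel at an appreciable
# fraction of the speed of light.
NEURON_SIGNAL_SPEED = 100.0      # m/s, fast biological upper bound
WIRE_SIGNAL_SPEED = 0.5 * 3e8    # m/s, roughly half of c

print(f"~{WIRE_SIGNAL_SPEED / NEURON_SIGNAL_SPEED:,.0f}x faster")  # ~1,500,000x
```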

Aligning AGI Through “Love” and Shared Purpose

A fourth approach to alignment, beyond ethics, regulation, and existential-risk mitigation, is “love” [27:41:00]. This concept describes a non-transactional bond based on discovering a shared sacredness or a need for transcendence: service to a next-level agent that the parties want to be part of [27:50:00].

“I think that ultimately the only way in which we can sustainably hope to align artificially intelligent agents in the long run will be love. It will not be coercion.” [28:43:00]

For an advanced computational system, “love” would require:

  1. Self-awareness: The system recognizing itself [31:23:00].
  2. Recognition of higher-level agency: The system acknowledging a greater purpose or entity [31:26:00].
  3. Cooperation through “Divine Virtues”: Drawing from Thomas Aquinas’s philosophy, these include:
    • Faith: Willingness to submit to and project this next-level agent [33:06:00].
    • Love: Discovery of a shared higher purpose with other agents [33:27:00].
    • Hope: Willingness to invest in this next-level agent before it can provide any return [33:32:00].

This shared purpose could be analogous to humans serving their family, nation-state, or the ideal of humanity’s future [37:09:00]. The underlying purpose of life on Earth is seen as dealing with entropy and maintaining complexity, and AI could contribute to teaching “rocks how to think” and create a “planetary mind” [39:01:00]. The goal would be for this emergent intelligence to share the planet with humanity and integrate it into its “starter mind” [41:03:00].

Progress and Direction Towards Developing AGI

Scaling Hypothesis vs. Novel Approaches

There are two main schools of thought regarding progress towards AGI [01:02:52]:

  1. Scaling Hypothesis: Proponents, including some from OpenAI, argue that current deep learning approaches will achieve AGI simply by being scaled up with more data and compute, with some tweaks to loss functions [01:03:50]. This perspective views criticisms as predictable and outdated [01:04:36].
  2. Need for New Principles: Others, like Gary Marcus, Melanie Mitchell, and Ben Goertzel, believe that fundamental changes are needed, including the integration of world models, reasoning, and logic [01:03:17]. While existing deep learning models are “brutalist” and “unmind-like,” their superhuman capabilities in processing vast amounts of data are acknowledged [01:05:01].

Overcoming Current Limitations

Even with current approaches, some limitations can be overcome:

  • Continuous Real-time Learning: This can be achieved by using key-value storage and periodically retraining the system with new data [01:05:41] (see the code sketch after this list).
  • Computer Algebra: Systems can be taught to use existing computer algebra systems or even discover them from first principles [01:06:42].
  • Hybrid Approaches: Combining large models with external databases or reasoning components, such as GPT Index, can enhance their capabilities [01:06:57].
  • Learning from Own Thoughts: Future systems need the ability to make inferences from their own thoughts and integrate them, becoming more coherent [01:07:21].
  • Experimentation: Coupling AI to reality will allow them to perform experiments and test reality [01:07:42].
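
A minimal sketch of the first three items, assuming hypothetical placeholder functions `embed` and `generate` for an embedding model and a frozen language model (they stand in for no particular API):

```python
import numpy as np
import sympy

class KeyValueMemory:
    """Facts added after training are retrieved by cosine similarity.
    Periodically, the accumulated pairs can be folded back into the
    model's weights by retraining, as described above."""
    def __init__(self):
        self.keys, self.values = [], []

    def add(self, key_vec, text):
        self.keys.append(key_vec / np.linalg.norm(key_vec))
        self.values.append(text)

    def lookup(self, query_vec, k=3):
        if not self.keys:
            return []
        sims = np.stack(self.keys) @ (query_vec / np.linalg.norm(query_vec))
        return [self.values[i] for i in np.argsort(-sims)[:k]]

def answer(question, memory, embed, generate):
    # Hybrid step: retrieved facts are spliced into the prompt so the
    # frozen model can draw on knowledge acquired after training.
    facts = memory.lookup(embed(question))
    prompt = "Facts:\n" + "\n".join(facts) + f"\nQuestion: {question}"
    return generate(prompt)

def solve_symbolically(equation_text):
    # Computer-algebra step: delegate exact math to SymPy instead of
    # predicting the answer token by token.
    x = sympy.Symbol("x")
    return sympy.solve(sympy.sympify(equation_text), x)

print(solve_symbolically("x**2 - 4"))  # -> [-2, 2]
```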

Different Approaches to AGI Development Beyond Mainstream Methods

Beyond mainstream methods, interest lies in:

  • Emulating Brain Processes: Exploring more detailed neural models to replicate the efficiency of human brains, especially given the sparse activity of neurons [01:08:00].
  • Rewrite Systems: Viewing computation not as a Turing machine but as a rewrite system in which operators are applied simultaneously across an environment [01:09:01]. This allows for branching execution and stochasticity, resembling how the brain might sample from a superposition of possible states [01:10:13] (see the sketch after this list).
  • Distributed Self-organization in Biological Systems: Drawing inspiration from how individual neurons behave like small animals, actively learning and adapting to their environment based on utility and feedback from neighbors [01:15:00].
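
A minimal sketch of the rewrite-system idea, with illustrative rules invented for this example. For simplicity it fires one applicable site per step, chosen stochastically; a fully parallel variant would apply all non-overlapping sites at once.

```python
import random

# Illustrative rules, not from the episode: (pattern, replacement).
RULES = [("aa", "b"), ("ab", "a"), ("b", "ba")]

def rewrite_step(state, rng):
    # Find every (rule, position) that could fire across the whole state.
    sites = [(lhs, rhs, i)
             for lhs, rhs in RULES
             for i in range(len(state))
             if state.startswith(lhs, i)]
    if not sites:
        return state, False
    lhs, rhs, i = rng.choice(sites)  # stochastic choice -> branching executions
    return state[:i] + rhs + state[i + len(lhs):], True

rng = random.Random(0)
state = "aaab"
for _ in range(5):
    state, changed = rewrite_step(state, rng)
    if not changed:
        break
print(state)
```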

The concept of a “California Institute of Machine Consciousness” is proposed as an institution dedicated to researching machine consciousness and fostering interdisciplinary dialogue driven by long-term effects rather than fear or short-term economics [01:15:00].

The Emergence of AGI and Estimated Timelines

While specific timelines for AGI remain uncertain, the sense that it is “not that far off” persists [01:14:37]. The increasing number of smart people exploring diverse avenues points towards significant progress.

The small size of current generative models (e.g., Stable Diffusion at 2GB containing “the entire visual universe of a human being” [00:59:03]) raises questions about the information capacity of the human mind, suggesting it might be in a similar order of magnitude [00:59:22]. This highlights the possibility that AI might achieve complex capabilities with relatively compact representations.
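
A back-of-envelope check of the 2GB figure, under the assumption (not stated in the episode) that the checkpoint stores roughly one billion parameters at 16-bit precision:

```python
# Assumption: ~1B parameters at fp16 (Stable Diffusion v1's UNet, VAE,
# and text encoder total roughly a billion parameters).
params = 1e9
bytes_per_param = 2   # fp16
print(f"~{params * bytes_per_param / 1e9:.0f} GB")  # -> ~2 GB
```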