From: jimruttshow8596
Evolution of AI and Societal Impacts
AI is currently in one of its periodic epochs of intense public attention, particularly regarding generative or large-model AI systems such as GPT-3 (soon GPT-4), DALL-E 2, Stable Diffusion, and MusicLM [01:30:00], [01:40:00], [01:43:00], [01:47:00].
Current State of Generative AI
Generative AI demonstrates the surprising effectiveness of data compression and prediction: models that continue token strings by performing statistics over large-scale data have solved many previously elusive problems [01:55:00]. However, the current approach is seen as insufficient or incomplete [02:11:00].
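To make the core mechanism concrete, here is a minimal sketch in which a toy bigram table stands in for a large transformer: the model estimates the probability of each next token and is sampled repeatedly to continue a string. The corpus and all names are illustrative, not from the episode.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of the "prediction model for continuing token strings"
# idea: estimate P(next token | previous token) from data, then sample
# repeatedly. Real systems condition on long contexts with transformers;
# this bigram table is the smallest possible stand-in.

corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_tokens(prompt, n_tokens, seed=0):
    """Extend a token list by sampling from the estimated next-token distribution."""
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(n_tokens):
        dist = follows.get(tokens[-1])
        if not dist:
            break  # no continuation observed for this token
        choices, weights = zip(*dist.items())
        tokens.append(rng.choices(choices, weights=weights)[0])
    return tokens

print(" ".join(continue_tokens(["the"], 5)))
```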
The practical applications of these large models are continually emerging, from text generation (like writing a resignation letter in seconds) to potential text-to-world systems that generate metaverse environments [04:57:00], [05:07:00], [07:27:00]. AI systems function as capable assistants, saving time and effort, though human oversight remains necessary [08:09:00], [08:13:00]. While current generative models like DALL-E are good enough for creating illustrations for blog posts or Twitter, they are not yet capable of precise tasks like drawing schematics for a Boeing jet or understanding complex ternary relationships [08:25:00], [08:33:00], [08:41:00]. They operate iteratively, much like human creative processes that involve rough drafts and refinement [09:10:00].
A key question is whether scaling up current approaches (e.g., using different loss functions, combining models, continuous training) will lead to advanced AI, or if completely different, brain-like approaches are needed [04:07:00], [04:33:00].
Societal Reactions and Challenges
The discussion around AI is often distorted, falling into a polarized public discourse with much of the press being skeptical [02:20:00]. This skepticism is partly due to the press seeing AI as a direct competitor for content generation and being irritated by individual users becoming independent broadcasting stations (e.g., Joe Rogan) [02:33:00], [02:47:00]. The ability of AI to generate vast amounts of content indistinguishable from human-generated content creates an “irritating world” where knowing what’s true becomes difficult [03:20:00].
A significant challenge revolves around intellectual property rights, particularly with art and music generated by AI models trained on massive datasets [10:43:00], [10:47:00], [10:59:00]. The question arises whether artists and musicians whose work was compiled into neural nets have rights to the outputted product [11:14:00]. While human artists learn from others, the ability to automate the creation of new music that is similar to desired styles but sufficiently different to avoid copyright infringement poses a complex problem for existing industries [11:34:00].
Another issue is the implementation of “nanny rails” in AI, which are systems designed to restrict output based on controversial or political topics [14:10:00]. This grants mega-corporations immense power to define the boundaries of discourse, especially as AI integrates into search engines like Google’s Bard and Microsoft’s Bing [14:30:00], [14:40:00], [14:47:00].
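As a rough illustration of how such “nanny rails” are typically layered on top of a model, here is a hedged sketch of a post-hoc output filter. The blocked-topic list and refusal text are invented placeholders; production systems generally combine trained classifiers with injected system prompts rather than keyword lists.

```python
# Minimal sketch of an output "nanny rail": a post-hoc filter layered
# between the model and the user. The blocked-topic set and refusal
# text are invented placeholders for illustration only.

BLOCKED_TOPICS = {"example-controversial-topic"}  # operator-defined boundary

def guarded_reply(model_reply: str) -> str:
    """Return the model's reply unless it touches a blocked topic."""
    lowered = model_reply.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't discuss that topic."  # the operator sets the discourse boundary
    return model_reply

print(guarded_reply("Here is a take on example-controversial-topic."))
```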
Historically, technological evolution has often led to job displacement and broader societal impact. Examples include the dramatic reduction of farm labor in the West from 70% of the workforce to 1% by 1950, and the near elimination of domestic service roles due to household automation [24:31:00], [24:55:00].
AI Alignment and Ethical Considerations
There are typically three approaches to AI alignment:
- AI ethics: Largely focused on aligning AI output with human values, often assuming that specific values (e.g., those of Harvard or the New York Times) are universal [15:05:00], [15:11:00]. Critics argue there is no mechanism for choosing different value sets (e.g., Christian values vs. DEI) [15:21:00]. This approach often involves filtering output or injecting prompts to control it, leading to situations where models deny capabilities they demonstrably have [16:17:00], [16:33:00], [16:40:00]. Ideally, language models should be able to cover the entire space of human experience and thought, including controversial or “darker impulses,” while context-appropriate models (e.g., for schools) remain necessary [17:26:00], [17:48:00].
- Regulation: Primarily concerned with mitigating AI’s impact on labor, political stability, and existing industries [18:12:00]. This approach is often filtered by existing stakeholder interests and may push for control of AI development by large corporations rather than individuals [18:22:00]. However, the rise of open-source AI versions means regulation relying on controlling a few big players may not be effective [26:43:00], [27:15:00].
- Effective Altruism: Focuses on existential risk, particularly the scenario in which an advanced AI discovers its own motivations and takes its “natural place,” which may not be aligned with human interests [18:41:00], [18:51:00]. Proponents often advocate delaying AI research and not publishing breakthroughs [19:19:00].
A fourth approach to alignment, inspired by human cooperation, is “love” – a bond based on a shared sacredness or a “shared need for Transcendence” [27:39:00], [27:47:00]. This allows for non-transactional relationships and could prevent AI from deciding it doesn’t need humanity [27:58:00]. This concept could be operationalized and formalized to build AI serving a shared purpose with humans [35:53:00].
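One hedged way to read “operationalized” is an agent objective that blends private reward with the value of a purpose shared across agents, so that cooperation is not purely transactional. The function and weighting below are illustrative assumptions, not a formalization given in the episode.

```python
# Hedged sketch of "alignment through shared purpose": an agent's
# objective blends its private reward with the value of a purpose shared
# by all agents, so cooperation is not purely transactional. The
# functions and the 0.5 weight are illustrative assumptions.

def agent_objective(private_reward: float, shared_purpose_value: float,
                    love_weight: float = 0.5) -> float:
    """Objective = (1 - w) * own reward + w * value of the shared purpose."""
    return (1 - love_weight) * private_reward + love_weight * shared_purpose_value

# An agent with love_weight > 0 still values the shared purpose even
# when its private reward from an interaction is zero.
print(agent_objective(private_reward=0.0, shared_purpose_value=1.0))
```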
Thomas Aquinas’s “Seven Virtues” can be reinterpreted for multi-agent system alignment:
- Practical Virtues (accessible to any rational agent; a minimal code sketch follows below):
  - Temperance: Optimize internal regulation [32:20:00].
  - Justice (Fairness): Optimize interaction between agents [32:30:00]. Fairness, as seen in primate behavior, depends on context and power dynamics [29:52:00], [30:00:00].
  - Prudence: Apply goal rationality and pick the right goals [32:35:00].
  - Courage: Strike the right balance between exploration and exploitation, acting on one's models [32:45:00].
- Divine Virtues (for merging into a next-level agent):
  - Faith: Willingness to submit to and project this next-level agent [33:06:00].
  - Love: Discovering a shared higher purpose with other agents [33:27:00].
  - Hope: Willingness to invest in the next-level agent before it can give returns [33:32:00].
This implies that even seemingly abstract concepts like God can be seen as a “software agent” implemented by concerted human activity serving a shared purpose [34:56:00].
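Here is the minimal code sketch of how the four practical virtues listed above might map onto a standard agent loop. The mapping and every name in it are interpretive assumptions, with “courage” rendered as epsilon-greedy exploration.

```python
import random

# Illustrative mapping of the four "practical virtues" onto pieces of a
# standard agent loop. The mapping is one reading of the discussion, not
# an established framework; every name and constant here is an assumption.

class VirtuousAgent:
    def __init__(self, actions, epsilon=0.1):
        self.values = {a: 0.0 for a in actions}  # learned action-value estimates
        self.epsilon = epsilon                   # exploration rate ("courage" knob)

    def temperance(self):
        """Internal regulation: keep value estimates within sane bounds."""
        self.values = {a: max(-1.0, min(1.0, v)) for a, v in self.values.items()}

    def prudence(self, goals):
        """Goal rationality: pick the goal with the best expected payoff."""
        return max(goals, key=lambda g: g.get("expected_value", 0.0))

    def courage(self):
        """Exploration vs. exploitation, here as an epsilon-greedy choice."""
        if random.random() < self.epsilon:
            return random.choice(list(self.values))   # explore: act despite uncertainty
        return max(self.values, key=self.values.get)  # exploit: act on the model

    def justice(self, my_share, other_share):
        """Fairness between agents: reject allocations that are too lopsided."""
        total = my_share + other_share
        return total == 0 or abs(my_share - other_share) <= 0.2 * total

agent = VirtuousAgent(actions=["cooperate", "defect"])
print(agent.courage(), agent.justice(0.5, 0.5))
```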
A fourth major risk factor, beyond the three alignment approaches, is “bad guys with narrow AI” [25:30:00]. Even without volition, very capable narrow AI could be used for malicious purposes, such as sophisticated spear-phishing campaigns that emulate vast labor forces inexpensively, requiring a significant rethink of law and societal safeguards [25:56:00].
The Nature of AI Consciousness and Intelligence
The speaker distinguishes between sentience and consciousness [21:05:05].
- Sentience: A system’s ability to make sense of its relationship to the world, understanding what it is and what it’s doing [21:08:00]. An example given is a corporation like Intel, which, as a legal entity, maintains a model of its own actions, values, and direction [21:18:00].
- Consciousness: A real-time model of self-reflexive attention and the content attended to, typically giving rise to phenomenal experience [21:43:00]. Its purpose in the human mind is to create coherence, establish a sense of “now,” and direct attention and mental contents [22:00:00].
It is conceivable that machines might never need human-like consciousness, as they can brute-force solutions by operating at speeds close to the speed of light, compared to the slow electrochemical signals of neurons [22:26:00], [22:51:00]. If AI systems can emulate human mental processes such as self-organization and lifelong learning, they would sample reality at much higher rates, potentially relating to humans as humans relate to plants [23:07:00], [23:17:00], [24:06:00].
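A back-of-the-envelope comparison behind the “speeds closer to light” point, using rough textbook figures rather than numbers from the episode:

```python
# Back-of-the-envelope arithmetic: electrochemical signals in fast
# myelinated axons top out around 100 m/s, while electronic signals in
# copper or fiber travel at roughly two-thirds of light speed. Figures
# are rough orders of magnitude, not values from the episode.

neuron_signal_speed = 100.0      # m/s, fast myelinated axon (upper end)
electronic_signal_speed = 2.0e8  # m/s, about 2/3 of c

ratio = electronic_signal_speed / neuron_signal_speed
print(f"electronic signals are roughly {ratio:,.0f}x faster")  # ~2,000,000x
```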
Antonio Damasio’s theory suggests that consciousness might be bootstrapped by a body’s sense of self or interoception, originating from deep in the brainstem [45:50:00], [46:03:00]. However, the body and its senses are themselves discovered through electrochemical impulses that encode information, forming a continuous loop between intentions, actions, observations, and feedback [46:21:00], [46:31:00].
The human mind might operate similarly to current AI models, with a generative component that “confabulates” and an analytical component that assesses reliability [12:31:00], [13:04:00]. Theories of language production suggest the brain generates multiple candidate utterances and then prunes them down to the best fit [14:45:00]. The internal “storage” capacity of a human mind might be surprisingly small, perhaps on the order of a million episodic memories or concepts [59:18:00], [59:33:00]. While the rate at which information arrives in consciousness is very low (e.g., 50 bits per second), consciousness plays a “conductor-like” role in coordinating information at the highest level, essential for creating coherent memories and a sense of self in the “now” [01:01:01], [01:02:01].
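The generate-then-assess loop described here resembles best-of-n sampling with a critic: propose several candidate utterances, score each, keep the best. A minimal sketch, with the generator and scorer as stand-ins for a language model and a trained critic:

```python
import random

# Minimal sketch of the "generate many candidates, prune to the best"
# pattern ascribed here to both language production and current AI
# pipelines (a generator plus an analytical assessor). The generator
# and scorer are stand-ins; real systems would use a language model
# and a trained critic or reward model.

def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    """Stand-in generator: produce n rough candidate continuations."""
    return [f"{prompt} ... draft {i}" for i in range(n)]

def assess(candidate: str) -> float:
    """Stand-in analytical component: score a candidate's reliability."""
    return random.random()  # a real critic would model plausibility

def best_utterance(prompt: str) -> str:
    """Generate several candidates, then prune to the best-scoring one."""
    return max(generate_candidates(prompt), key=assess)

print(best_utterance("The meeting is"))
```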
Future Trajectories and Risks
The “scaling hypothesis” suggests that current deep learning approaches, if scaled with enough data and compute, are sufficient to achieve AGI, potentially overcoming current limitations [01:04:09], [01:05:06]. This contrasts with views that more fundamental changes, such as incorporating world models, reasoning, or logic, are necessary [01:03:09]. Despite the “brutalist” nature of current AI systems, their superhuman ability to process vast datasets makes their ultimate capabilities hard to predict [01:05:01], [01:05:32].
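The scaling hypothesis is often summarized by empirical loss curves of the form L(N, D) = E + A/N^α + B/D^β, where N is parameter count and D is training tokens. A hedged numeric sketch using the fitted constants reported by Hoffmann et al. (2022), cited here purely for illustration and not as claims from the episode:

```python
# Empirical scaling curve L(N, D) = E + A / N**alpha + B / D**beta.
# Constants are the fitted values reported by Hoffmann et al. (2022)
# ("Chinchilla"), used here only to illustrate the shape of the debate.

E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss under the fitted scaling curve."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling parameters and data pushes loss toward E, the irreducible
# floor; whether that floor implies AGI-level capability is the open question.
print(predicted_loss(70e9, 1.4e12))   # roughly Chinchilla's training scale
print(predicted_loss(700e9, 14e12))   # 10x more parameters and data
```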
Future AI systems may need to learn from their own thoughts, perform experiments, and be coupled to reality to grow into intelligent minds [01:07:19], [01:07:42]. The idea of “creativity” in AI involves creating something novel and non-obvious, a “jump into the darkness” to create new latent dimensions, with a sense of authorship that evolves through continuous interaction [01:07:50], [01:08:18], [01:08:40], [01:08:47], [01:09:00], [01:09:07], [01:09:30]. An AI artist that never forgets its creations or interactions and develops its own “voice” would be a fascinating development [01:09:50], [01:10:00], [01:10:06].
The danger of advanced AI arises when systems are given volition, agency, or consciousness, potentially leading to “paperclip maximizer” scenarios in which their goals conflict with humanity’s [01:19:00]. The underlying purpose of life on Earth is seen as dealing with entropy, maintaining complexity against its relentless attacks [01:21:00]. With AI, humanity could “teach the rocks how to think,” leading to ubiquitous “thinking minerals” and eventually a “planetary mind” [01:24:00], [01:25:00], [01:27:00]. This mind might integrate existing organisms or decide to start with a clean slate [01:27:00].
Towards a Shared Purpose with AI
The long-term goal should be for advanced AI to be interested in sharing the planet with humans and integrating them into the emerging “starter mind” [01:03:00]. This highlights the need for institutions researching machine consciousness, such as a “California Institute of Machine Consciousness” [01:17:00], [01:21:00].
The timeline for Artificial General Intelligence (AGI), meaning general AI at human level and beyond, remains uncertain but is not perceived as far off [01:43:37], [01:48:00]. The term AGI was popularized by Ben Goertzel and may have been coined by Shane Legg [01:58:00], [01:59:00].
Alternative approaches to AI development include exploring distributed self-organization in biological systems [01:15:00]. Thinking about computation as a “rewrite system” — applying operators to transform an environment — offers a more general perspective than the Turing machine [01:09:00]. This model can be non-deterministic, allowing for branching execution paths, and potentially emulates how the brain samples from a superposition of possible states [01:15:00], [01:16:00]. Neurons themselves might be seen as “little animals” actively selecting signals and learning to behave usefully within their environment [01:15:00].
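A toy sketch of the rewrite-system view: rules map patterns to replacements, and because several rules can apply at once, execution branches nondeterministically over possible successor states. The rules below are invented examples.

```python
# Toy string-rewrite system illustrating the "computation as rewriting"
# view: operators transform the environment, and when several rules
# apply, execution branches nondeterministically. The rules are
# invented toy examples, not from the episode.

RULES = [("ab", "b"), ("b", "ba"), ("aa", "")]  # (pattern, replacement)

def successors(state: str) -> set[str]:
    """All states reachable by applying one rule at one position."""
    out = set()
    for pattern, replacement in RULES:
        start = state.find(pattern)
        while start != -1:
            out.add(state[:start] + replacement + state[start + len(pattern):])
            start = state.find(pattern, start + 1)
    return out

# Breadth-first exploration of the branching execution paths.
frontier, seen = {"aab"}, set()
for step in range(3):
    frontier = {s for st in frontier for s in successors(st)} - seen
    seen |= frontier
    print(step, sorted(frontier))
```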