From: mk_thisisit
Max Fisher, a journalist for the New York Times, has extensively investigated how social media algorithms contribute to chaos worldwide [01:08:08]. His book, “The Chaos Machine,” explores these findings, arguing that social media can lead to violence and death [01:13:13].
Social Media’s Harmful Impact
Fisher initially held skepticism about the notion that social media could lead to deaths [02:04:04]. However, after years of study and observing its real-world impact, he believes it can indirectly cause fatalities [02:07:07]. While the effect of social media on an individual might be small, its daily operation across billions of people amplifies its impact [02:31:31]. It pushes politics in a more dangerously divisive or hateful direction [02:35:35].
Notable instances where social media has been linked to significant loss of life include:
- The genocide in Myanmar [02:42:42]
- Violence in Sri Lanka [02:44:44]
- Communal violence in India [02:46:46]
In these events, a “significant number of people died as a result of racist violence or other violence that was triggered on social media and would not have occurred if it were not for the existence of these platforms” [02:51:51]. In some extreme cases, the platforms themselves have admitted their role [03:01:01].
The Role of Algorithms
Social media companies like Facebook are designed to maximize user engagement, primarily for financial gain [05:17:17]. Their algorithms select and order content to achieve this goal [05:10:10]. Given human nature, the content that is most stimulating and keeps users on the platform includes:
- Fear of others [05:39:39]
- A sense of “us versus them” [05:41:41]
- Conspiracy theories [05:42:42]
- Hate speech [05:46:46]
- Social or collective outrage [05:48:48]
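The mechanism described above can be sketched in a few lines of code. This is a purely illustrative toy, not any platform’s actual system: every name, weight, and signal here is hypothetical. The point is only that a ranker optimizing predicted engagement will surface whatever content scores highest on those signals, regardless of what the content says.

```python
# Toy feed ranker: orders posts by predicted engagement alone.
# All names and weights are hypothetical; real ranking systems
# are far more complex and are not public.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    predicted_clicks: float   # model-estimated chance a user clicks
    predicted_shares: float   # model-estimated chance a user shares


def engagement_score(post: Post) -> float:
    # Hypothetical weighting: shares count double, since a share
    # spreads the post and keeps more users on the platform.
    return post.predicted_clicks + 2.0 * post.predicted_shares


def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first; the post's content is
    # never inspected, only its predicted engagement.
    return sorted(posts, key=engagement_score, reverse=True)
```

Under this objective, a post that provokes outrage and gets shared widely will outrank calmer material even if no one designed the system to favor outrage; the bias falls out of the optimization target.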
Internal researchers at these companies have repeatedly provided evidence that these platforms lead to racist hatred and violence [00:06:06], [05:52:52]. For example, Facebook researchers warned about the situation in Myanmar, detailing how platforms promoted specific conspiracy theories, hate speech, and religious and racial incitement [04:17:17].
A historical example is the introduction of Facebook’s News Feed in 2006. While most users were content, a small percentage disliked it over a perceived loss of privacy [01:21:21]. Anti-News Feed and anti-Mark Zuckerberg groups formed, and because the algorithm promoted “social outrage,” posts about these groups spread widely, creating an illusion of majority anger and encouraging conformity [01:40:40]. Zuckerberg’s response acknowledged that the News Feed drove engagement and traffic, which led to its permanence [01:34:34]. This incident, seen by some within Facebook as a “funny moment,” was in fact a “huge wave of outrage and a kind of massive disinformation” [01:12:12].
Responsibility of Leadership
While creators like Mark Zuckerberg did not intend to harm people, believing they were creating something helpful [03:36:36], the management of these companies had evidence from their own researchers of the platforms’ destructive potential [03:42:42]. That Zuckerberg was aware of this evidence, even in specific cases where thousands of deaths were predicted, yet took no action makes these companies responsible for the deaths they failed to prevent [06:01:01].
Regarding Elon Musk’s ownership of Twitter (now X), Fisher suggests Musk is primarily seeking approval from a user base that includes “deeply conservative trolls and gamers” [02:43:43]. Musk’s stated intention to make Twitter a “radically free speech platform” often translates to reinstating “trolls and voices of the Far Right” [02:02:02].
Addiction and User Behavior
Social media applications are notably difficult to put down [09:33:33]. Companies like Facebook and Google (which owns YouTube) have hired top minds in artificial intelligence and computer programming to make users addicted to their platforms [09:43:43]. Max Fisher admits to being addicted to his phone, with social media being a significant part of that addiction [09:22:22].
Users often exhibit “hypocritical” behavior, criticizing social media while using it daily [08:30:30]. Fisher notes that these platforms have become so effective that it is “impossible to move in the modern world without using these platforms” [10:10:10].
Potential for Regulation
Europe has been more proactive in implementing regulations, starting with GDPR concerning data handling [02:06:06], and now working on regulations regarding artificial intelligence [02:10:10]. In contrast, the United States lacks social media regulation [02:14:14].
There is a growing consensus that current regulations, often focused on content moderation, are insufficient [02:38:38]. The fundamental issue is that platforms are designed to artificially promote content that fosters division and hatred [02:58:58]. Therefore, the only effective way to regulate is by addressing the design of the systems themselves, specifically by impacting algorithms and content promotion [02:11:11].
Impact on Democracy
Social media significantly accelerates political polarization [03:08:08]. While the crisis of democracy began before social media’s widespread presence, these platforms undeniably worsen it [03:12:12]. They play a clear role in shaping public opinion, perception of reality, news priorities, and how individuals connect with their identity and politics [03:26:26].
The 2016 U.S. presidential election highlighted the impact of disinformation and of outlets like Breitbart News, a far-right source amplified by algorithms because of its ability to drive engagement [01:13:13]. When platform algorithms were changed, Breitbart News’s audience declined dramatically, demonstrating the artificial amplification previously at play [01:25:25].
Future of AI in Journalism
While current artificial intelligence tools like ChatGPT are not yet directly useful for investigative journalism, their potential for processing and analyzing vast amounts of information is promising [02:01:01]. AI could assist in finding obscure documents or government files, especially in foreign languages [02:09:09].
However, AI is currently limited to processing existing text and recombining old ideas [02:35:35]. It cannot propose original ideas, new angles for articles, or story concepts [02:55:55]. While media outlets might in the future use AI to generate “raw copies of articles” for human refinement, large language models lack the ability to create [02:08:08]. Furthermore, AI cannot contact confidential sources or obtain information from them, which remains crucial for investigative journalism [02:01:01]. The Writers Guild of America is currently on strike, demanding rules prohibiting the use of AI in creating content for TV shows and movies [02:53:53]. This highlights broader concerns about the ethical and societal impact of AI development.