From: allin

A federal appeals court panel recently ruled that Section 230 of the Communications Decency Act does not protect TikTok’s algorithm, raising significant questions about platforms’ liability for content their algorithms promote [00:52:30].

What is Section 230?

Section 230 of the Communications Decency Act grants internet platforms that host user-generated content immunity from being sued over content published by those users [00:53:32]. This means platforms like YouTube, TikTok, and X are generally not liable for content posted by their users [00:53:38]. The intent is to treat social networks as distributors, not publishers, of user-generated content [00:58:58].

The Blackout Challenge and TikTok Case

In late 2021, a 10-year-old girl in Pennsylvania tragically died while participating in a “Blackout Challenge” she encountered on TikTok [00:52:53]. This challenge encourages users to choke themselves with objects until they pass out and has been linked to the deaths of 15 young people [00:52:59]. The child’s mother sued TikTok, arguing that the platform’s algorithm served these videos to her child, making TikTok responsible [00:53:13].

Historically, courts have treated algorithmic recommendations as protected under Section 230 [00:53:20]. In this case, however, the appeals court held otherwise, with the judge writing that TikTok’s algorithm is a “unique expressive product” that communicates with users through a curated stream of videos and therefore reflects “editorial judgment” [00:53:55].

Connection to SCOTUS Ruling

This new ruling specifically cites the Supreme Court’s recent decision in Moody v. NetChoice [00:54:14]. That case concerned a Florida law barring tech companies from deplatforming political officials, and the Court’s unanimous decision was widely seen as a win for free speech and for big tech’s content-moderation rights [00:54:18]. The irony is that the same reasoning cuts both ways: if content moderation and curation are First Amendment-protected editorial judgment, then an algorithmically curated feed is the platform’s own expressive product rather than purely third-party content, which is exactly the logic used to deny Section 230 immunity for algorithmic recommendations [00:54:37].

Arguments for Algorithmic Liability

Arguments supporting algorithmic liability posit that algorithms act as modern-day editors:

  • Algorithms as Editors: Algorithms are effectively mathematical equations of variables and weights, akin to the mental model a traditional editor uses to curate content [00:54:56]. On this view, algorithmic decision-making can itself be considered an editorial decision [00:55:34] (see the sketch after this list).
  • Hooking Users: Algorithms are designed to iteratively improve at “hooking people on whatever is trending,” and companies are often aware of these trends [00:56:43]. They “created a net” to amplify viral content [00:57:13].
  • Momentum Focus: Algorithms focus on momentum, quickly shifting what they show based on user interaction and producing “hundreds of iterations an hour” [01:05:34]. That structure means that if actors in the system set out to create momentum in a particular direction, the algorithm would amplify it, potentially before the manipulation is caught [01:06:11].
  • Social Impact: Powerful algorithms can cause dopamine deficiency and depression in children by delivering constant dopamine hits, reducing their ability to find joy in real-world interactions [01:07:14].
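To make the “weighted equation” framing concrete, here is a minimal, hypothetical sketch of a feed ranker in Python. The signal names, weights, and Candidate fields are invented for illustration and do not describe TikTok’s actual system; the point is that choosing the weights is where the claimed “editorial judgment” would live, and that a heavy weight on velocity signals is what produces the momentum effect described above.

```python
# Hypothetical sketch only: invented signals and weights, not any real platform's code.
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    watch_time: float            # average seconds watched per impression
    shares_per_hour: float       # recent share velocity (a "momentum" signal)
    likes_per_hour: float
    matches_user_history: float  # 0..1 similarity to what this user engaged with

# The weights are the "editorial judgment" in mathematical form:
# whoever sets them decides what the feed promotes.
WEIGHTS = {
    "watch_time": 0.4,
    "shares_per_hour": 0.3,      # heavy weight on velocity amplifies whatever is trending
    "likes_per_hour": 0.1,
    "matches_user_history": 0.2,
}

def score(c: Candidate) -> float:
    """Weighted sum over engagement signals; higher score = shown sooner."""
    return (WEIGHTS["watch_time"] * c.watch_time
            + WEIGHTS["shares_per_hour"] * c.shares_per_hour
            + WEIGHTS["likes_per_hour"] * c.likes_per_hour
            + WEIGHTS["matches_user_history"] * c.matches_user_history)

def rank_feed(candidates: list[Candidate], k: int = 10) -> list[Candidate]:
    """Return the top-k candidates; rerun frequently as signals update."""
    return sorted(candidates, key=score, reverse=True)[:k]
```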

Arguments Against Algorithmic Liability

Counterarguments against algorithmic liability emphasize the distinction between algorithms and human editorial judgment, and the potential negative consequences of increased liability:

  • Prior Precedent: Earlier court decisions, including cases involving users recruited by ISIS through terrorist videos on social networks, concluded that the use of algorithms did not “obviate” Section 230 protection [00:58:04]. The Supreme Court found the social networks were not liable, treating algorithmic feeds no differently than a regular chronological feed [00:58:38].
  • User-Driven Content: Unlike traditional editorial pages that promote specific viewpoints, algorithms are designed to give users “more of what you want” based on their interactions [01:00:00]. If a user interacts with “outrage porn,” the algorithm will show them more of it, not because the platform takes an editorial stance, but because the user’s clicks signal interest [01:00:36] (see the sketch after this list).
  • Risk of Censorship: Making online platforms liable as publishers for every piece of user-generated content would leave “very little free speech left,” because corporate risk aversion would force platforms to become “even more censorious” [00:59:06].
  • Algorithms Reflect User Preferences: Algorithms often show users content they genuinely like, such as mountain-biking videos, not necessarily content that incites anger or outrage [01:05:07].
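For contrast, here is an equally hypothetical sketch of the counterargument: a per-user profile whose weights are updated only by that user’s own interactions, with no platform-chosen viewpoint involved. The topic names and learning rate are assumptions for illustration; the mechanism, not any real platform’s code, is the point.

```python
# Hypothetical sketch only: topic names and learning rate are invented for illustration.
from collections import defaultdict

class UserTopicProfile:
    """Tracks how much a user engages with each topic (e.g. 'mountain_biking')."""

    def __init__(self, learning_rate: float = 0.1):
        self.learning_rate = learning_rate
        self.affinity = defaultdict(float)  # topic -> 0..1 interest estimate

    def record_interaction(self, topic: str, engaged: bool) -> None:
        """Nudge the topic's affinity toward 1 if the user engaged, toward 0 if not."""
        target = 1.0 if engaged else 0.0
        self.affinity[topic] += self.learning_rate * (target - self.affinity[topic])

    def rank_topics(self) -> list[str]:
        """Topics the feed will favor next, ordered purely by this user's behavior."""
        return sorted(self.affinity, key=self.affinity.get, reverse=True)

# The update rule is identical whether the user keeps tapping mountain-biking
# videos or outrage content; the feed simply drifts toward their own clicks.
profile = UserTopicProfile()
for _ in range(5):
    profile.record_interaction("mountain_biking", engaged=True)
profile.record_interaction("outrage_politics", engaged=False)
print(profile.rank_topics())  # ['mountain_biking', 'outrage_politics']
```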

Proposed Solutions and Broader Implications

Several solutions and broader implications were discussed:

  • Evolving Section 230: Instead of abolishing Section 230, some suggest evolving it [01:01:40].
  • User Choice of Algorithms: Platforms could offer users a choice of algorithms (e.g., the default, an education-leaning feed, a music-leaning feed, or no algorithm at all) or even allow users to bring their own algorithms [01:02:00] (see the sketch after this list). However, some argue that consumers are primarily drawn to content that “incites emotion,” so a choice of algorithms may do little to change user behavior [01:03:00].
  • Transparency/Open-Sourcing Algorithms: Opening up the algorithm, as X (formerly Twitter) has done, allows the public to see whether it is biased or whether editorial opinions are being inserted [01:04:35].
  • Parental Supervision and Age Restrictions: A more direct solution might be restricting access for young users, such as banning social media for those under 16 or 17 and keeping phones out of schools [01:06:58].
  • Winning vs. Labels: The discussion also touches on a broader point: successful companies are built by leaders who can identify unique paths and adapt to changing conditions, regardless of whether they are labeled “founders” or “managers” [02:51:57]. What matters is the ability to distinguish “stupid outcomes” from “smart outcomes”; attributing failure to a label like “manager mode” rather than taking responsibility is the real problem [03:51:00]. Ultimately, the goal is “winning,” which requires talented, driven, and accountable people [03:59:36].
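Returning to the “user choice of algorithms” idea above, the sketch below shows one way a platform could expose pluggable rankers, including a “bring your own algorithm” hook. All names here (Post, Ranker, the strategy functions) are hypothetical; this illustrates the proposal, not any platform’s actual API.

```python
# Hypothetical sketch only: invented types and strategy names, not a real platform API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    post_id: str
    created_ts: float        # unix timestamp
    engagement_score: float
    is_educational: bool

Ranker = Callable[[list[Post]], list[Post]]

def chronological(posts: list[Post]) -> list[Post]:
    """No algorithm: newest first."""
    return sorted(posts, key=lambda p: p.created_ts, reverse=True)

def engagement_default(posts: list[Post]) -> list[Post]:
    """The platform's default: most-engaging first."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def education_leaning(posts: list[Post]) -> list[Post]:
    """Boost educational posts ahead of everything else."""
    return sorted(posts, key=lambda p: (p.is_educational, p.engagement_score), reverse=True)

RANKERS: dict[str, Ranker] = {
    "chronological": chronological,
    "default": engagement_default,
    "education": education_leaning,
}

def build_feed(posts: list[Post], choice: str = "default",
               custom: Ranker | None = None) -> list[Post]:
    """Rank with the user's chosen strategy, or a user-supplied ranker."""
    ranker = custom if custom is not None else RANKERS[choice]
    return ranker(posts)
```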