
Court Rules TikTok Not Protected by Section 230

Algorithmic feeds like TikTok can now be found liable for harm they cause

In August 2024, an appeals court overturned a lower court’s ruling and found that TikTok could indeed be held liable for the death of a child who died after participating in a dangerous trend that other TikTok users had posted on the platform. The lower court had determined that, like other platforms that host user-generated content, TikTok was protected by Section 230 and couldn’t be sued over content posted by its users.

This ruling will have a major impact on internet users everywhere, and will change the way that companies like Google, Facebook, and Amazon operate.

What is Section 230?

Section 230 of Title 47 of the United States Code, commonly known simply as “Section 230,” was passed in 1996 as part of the Communications Decency Act. It protects freedom of expression online by shielding the platforms that enable online expression. The law states:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
-47 U.S.C. § 230(c)(1).

In other words, if someone posts something on Facebook that’s harmful, you can’t sue Facebook. The person who posts harmful content should be held responsible, not the service that hosts that content.

Section 230 has long been considered one of the most important pieces of internet legislation ever passed. Its protections for platform holders allowed the growth of Web 2.0 and the emergence of social networking sites like Facebook and Twitter.

Even back in 1996, Congress recognized that it would be impossible for online services to review the speech of every user before it was posted online. The law was passed with bipartisan support in order to protect the “true diversity of political discourse” and “opportunities for cultural development, and […] intellectual activity” that the internet provided.

Without Section 230, many of the biggest tech companies simply wouldn’t exist, or would be much more limited in scope. YouTube, for example, would have to painstakingly review every one of the millions of videos uploaded to its site each day or risk being buried under a landslide of lawsuits. This means YouTube would either have to implement incredibly strict filters that only let bland, pre-approved content through, or it would have to operate more like a traditional TV station that only accepts content from established companies and creators, rather than letting any user upload a video.

While undeniably important, Section 230 isn’t without controversy. Many politicians are opposed to the broad nature of the protections it affords tech companies. Many laws have been proposed over the years by both Republicans and Democrats that would limit these protections.


Why was TikTok sued?

The current lawsuit came about after TikTok users started posting videos of themselves choking themselves until they lost consciousness, dubbing it the “Blackout Challenge.” These videos were pushed to children’s feeds, and several children died trying to participate in the trend. Parents sued TikTok in 2022, but the court initially found that TikTok was not responsible for the video in question.

While dangerous trends like this were brought up during Senate hearings, this lawsuit is unrelated to the Federal TikTok ban or the various lawsuits filed against the government challenging the ban.

Although the lower court ruled in favor of TikTok, the plaintiffs appealed this decision to the Third Circuit Court of Appeals, based in Philadelphia, Pennsylvania. Judge Patty Shwartz reversed the lower court, finding that TikTok’s algorithm is an expressive product and therefore is not protected by Section 230, which only protects platforms from harmful content posted by third parties.

What did the court rule?

It’s always concerning when there are legal or legislative challenges to Section 230 because it’s the foundation of most of what we do online. It’s so influential that it’s been called “the twenty-six words that created the internet.” Dismantling Section 230 could mean the end of not just video platforms like TikTok and YouTube, but also personal blogs, Amazon reviews, eBay auctions, dating apps, and pretty much every way that people express themselves online.

There are decades worth of case law affirming that even if a site hosts material that is fraudulent, illegal, or harmful, the platforms hosting that information can’t be held liable because they didn’t create the material. In the case of TikTok, however, Judge Shwartz found that the situation was different. She noted that the Supreme Court had ruled that “a platform’s algorithm that reflects ‘editorial judgments’ about ‘compiling the third-party speech it wants in the way it wants’ is the platform’s own ‘expressive product’ and is therefore protected by the First Amendment.”

If a platform’s recommendation algorithm is protected under the First Amendment as the platform’s own form of expression, that same expression can’t also be shielded as third-party content under Section 230. Judge Shwartz therefore remanded the case to the district court to rule on the plaintiff’s remaining claims.

What does this mean for internet users?

This case has huge implications for internet platforms. Most platforms like YouTube, Instagram, and even Amazon have recommendation algorithms that suggest new videos, posts, or products to users. In fact, many have reworked their suggestion algorithms to be more like TikTok’s in an attempt to emulate the platform’s success. This, however, is exactly the issue highlighted in the TikTok case.

TikTok was not protected by Section 230 specifically because, as Judge Shwartz notes, “TikTok’s algorithm is not based solely on a user’s online inputs.” This means that all these major sites will have to either carefully police the type of content being suggested to users or remove their algorithmic feeds altogether.

This isn’t necessarily a terrible thing for users

Tech companies like algorithmic feeds because they can increase engagement and time spent on their apps. Engaging content isn’t always content that users like; offensive, upsetting, and controversial content often gets stronger reactions and more engagement than the content users actually asked to see. Back in early 2023, Twitter users were up in arms when the app switched from showing the people they followed in chronological order to a TikTok-like “For You” page full of tweets from people they didn’t follow. While suggestion algorithms can be helpful, they are often not what users actually want.

There are, of course, a lot of potential downsides to this ruling. Platforms might stop showing users political content or news of current events to avoid potential liability. They might even stop showing content from smaller creators and focus on big brands and organizations that already shy away from these topics.

The platform most impacted by this decision is TikTok

That’s not just because it can now be sued for damages in this case, but also because its entire business revolves around its recommendation algorithm. To be fair, TikTok’s attempts at moderation thus far have been pretty laughable. It’s common to see a video of frat boys injuring themselves while doing something stupid marked with a disclaimer that “The actions in this video are performed by trained professionals. Do not attempt.” It’s better than nothing, but it’s also obviously untrue.

It’s also not clear how broadly Judge Shwartz’s ruling will be applied. In the case of Jurin v. Google, Inc., all the way back in 2010, a court found that Google’s algorithmic suggestion of potentially trademark-infringing ad terms to advertisers was protected by Section 230, because suggesting a term while leaving the final choice up to the advertiser didn’t count as Google producing the allegedly infringing content. Judge Shwartz even pointed out that if the children harmed by the “Blackout Challenge” content had searched for the videos themselves, TikTok wouldn’t have been liable for their deaths.

It’s also important to note that this ruling comes in a period when many lawmakers have expressed an interest in eliminating Section 230 protections so as to enact harsher forms of censorship on online speech. On the other hand, big tech companies are pouring billions of dollars into AI models that, in many cases, blatantly steal the work of artists, authors, and other creators and pass it off as original work. While recommendation algorithms and large language models are very different types of AI, the output of either could create liability under this ruling’s logic.

Will we see major changes to suggestion algorithms? Will this lead to more censorship of user-generated content? It’s hard to say at this point, but it’s safe to say that this issue is far from resolved, and this case in particular will have a lasting impact on the way we use the internet.

Author -

Peter Christiansen writes about telecom policy, communications infrastructure, satellite internet, and rural connectivity for HighSpeedInternet.com. Peter holds a PhD in communication from the University of Utah and has been working in tech for over 15 years as a computer programmer, game developer, filmmaker, and writer. His writing has been praised by outlets like Wired, Digital Humanities Now, and the New Statesman.

Editor - Jessica Brooksby

Jessica loves bringing her passion for the written word and her love of tech into one space at HighSpeedInternet.com. She works with the team’s writers to revise strong, user-focused content so every reader can find the tech that works for them. Jessica has a bachelor’s degree in English from Utah Valley University and seven years of creative and editorial experience. Outside of work, she spends her time gaming, reading, painting, and buying an excessive amount of Legend of Zelda merchandise.