US Supreme Court Hears Oral Arguments in Taamneh v. Twitter, the Other Section 230 Case

The US Supreme Court is hearing another Section 230 case. This time, it’s Taamneh v. Twitter.

Yesterday, we reported on the US Supreme Court hearing oral arguments in the Gonzalez v. Google case. The question before the court in that case is whether YouTube can be held liable when its automated algorithms happen to recommend terrorist content uploaded by users, regardless of the obvious Section 230 protections. As we noted, the early signs show some promise that Section 230 may actually prevail, though we are also in the early stages and it’s more than possible that something awful could still come from that case.

Of course, Gonzalez v. Google isn’t the only case currently before the US Supreme Court trying to poke holes in the critical Section 230 protections. There is another case known as Taamneh v. Twitter. That case got underway today with oral arguments.

Like Gonzalez v. Google, Taamneh v. Twitter is targeting Section 230 protections. In this case, though, the question is whether a platform can be held liable even when it is already actively trying to moderate and remove such content. Specifically, if terrorist content is found on the platform, can the platform be held liable under anti-terrorism legislation, forgoing Section 230 protections?

This type of question is, of course, nothing new. There have been plenty of debates in the past about whether moderating content means you are liable for anything your moderation missed. After all, if something was left up, can a service be held liable for it? These questions date back to the 90’s and, currently, the legal answer is no: if a service is making good faith attempts at moderating content, it isn’t automatically liable for anything that may have been left up.

The problem with holding services liable for user-posted content whenever they moderate is that moderation suddenly means added legal liability for the service. That leaves only two options for services: either moderate aggressively and remove anything that could even remotely be deemed offensive in some way, or allow an absolute free-for-all. There would be no middle ground. What’s more, neither extreme is a particularly attractive outcome for the end user anyway.

That thinking played a role in where we are legally today: moderation doesn’t mean the service is liable. This case, however, seeks to upend that and say that terrorist content being left up means that the service is liable.

The hope here is that the US Supreme Court sides with the defence and affirms that Section 230 does apply.

This case is also rooted in the fact that someone died in a terrorist attack. While that is unfortunate, holding platforms liable for such content isn’t going to bring the victim back, nor is it going to make things better for anyone involved.

At any rate, this case is also a major threat to the free and open internet. A bad ruling here could greatly alter the course of internet history – and not for the better.

Drew Wilson on Twitter: @icecube85 and Facebook.
