February 21, 2023, will be a monumentally important day for the future of the internet. While most would like the date to be associated with the release of the next great piece of technology or innovative product, it is the date the United States Supreme Court will hear arguments in Gonzalez v. Google, the first case before the court to address Section 230 liability protections. The second case, Twitter v. Taamneh, will be heard the following day, teeing up a blockbuster week in the nation's highest court for the future of innovation in the United States.
In Gonzalez v. Google, the plaintiffs argue that social media platforms owned by Google and other large technology firms “allowed ISIS to post videos and other content to communicate the terrorist group’s message,” and that their algorithms pushed content that allowed ISIS “to radicalize new recruits, and to generally further its mission.” As such, according to the plaintiffs, the platforms “were directly and secondarily liable for ISIS’s acts of international terrorism,” namely the attacks in Paris, San Bernardino, and Istanbul. The 9th Circuit Court of Appeals ruled that Section 230 of the Communications Decency Act “bars most of the Gonzalez Plaintiffs’ claims” because the platforms could not be held liable for third-party content posted by users.
For many, what an algorithm is and what it does remains a mystery. In the context of social media, an algorithm “is a set of rules and signals that automatically ranks content on a social platform based on how likely each individual social media user is to like it and interact with it.” Algorithms work by tracking which content users have previously engaged with and how they engaged with it. For example, when a user “likes” dog videos, the algorithm will push similar content to their feed. Because of algorithms, social media platforms are able to filter out content users don’t want to see and surface the content they do want to see, creating a bespoke and unique experience.
Without algorithms, users’ feeds would be cluttered with content they are not interested in, degrading the value and overall experience of these platforms.
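The idea described above can be illustrated with a deliberately simplified sketch. This is not any platform's actual ranking system, and every name and score in it is hypothetical; it only shows the basic principle of ordering a feed by a user's past engagement with topics:

```python
# Toy illustration of engagement-based feed ranking -- NOT any real
# platform's algorithm. All topics, posts, and scores are hypothetical.
from collections import Counter

def rank_feed(posts, engagement_history):
    """Order posts by how often the user previously engaged with each topic."""
    topic_counts = Counter(engagement_history)  # e.g. {"dogs": 3, "news": 1}
    return sorted(
        posts,
        key=lambda post: topic_counts.get(post["topic"], 0),
        reverse=True,  # most-engaged topics float to the top of the feed
    )

posts = [
    {"id": 1, "topic": "news"},
    {"id": 2, "topic": "dogs"},
    {"id": 3, "topic": "cooking"},
]
history = ["dogs", "dogs", "dogs", "news"]  # the user's past "likes", by topic
ranked = rank_feed(posts, history)
print([p["id"] for p in ranked])  # dog video first, then news, then cooking
```

Real recommendation systems weigh far more signals (watch time, recency, social graph), but the core trade-off is the same one at issue in Gonzalez: the platform, not the user, decides the order in which third-party content appears.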
Ordinarily, the Gonzalez ruling would have been the end of the litigation, had it not been for a conflicting decision in Twitter v. Taamneh. In that case, the plaintiffs convinced judges that Twitter and other social media platforms failed to remove accounts associated with ISIS and failed to remove “content posted by supporters of such organizations.” By not removing these posts and accounts, the judges ruled, social media platforms “could be liable for aiding and abetting an act of international terrorism.”
As a result of the conflicting rulings, the Supreme Court has been forced to consider the limits of Section 230 liability protections, specifically whether they apply to algorithms, content moderation, and international terrorism. The consequences of the court’s decision will reverberate for decades and could reshape the internet as we know it.
If any piece of legislation has been subjected to bipartisan vitriol over the past few years, it’s Section 230 of the Communications Decency Act. Enacted in 1996, Section 230 was passed with the goal of promoting “the continued development of the Internet and other interactive computer services,” preserving “the vibrant and competitive free market” for digital services, and maximizing user control over the content they consume. To accomplish this, Congress determined that websites cannot be treated as publishers of online content and cannot be held liable for content moderation decisions taken concerning “material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” These provisions ultimately paved the way for the free and open internet that millions of Americans enjoy every hour.
Conservatives have been especially critical of Section 230, arguing it has been weaponized by social media platforms to censor their speech online. Such criticism has led to legislative calls to repeal or change Section 230 and remove important protections that keep the internet free and open. It is, however, important to consider the consequences of the Supreme Court narrowing or removing Section 230 protections.
If the Supreme Court rules against Google and Twitter, narrowing or removing Section 230’s protections, all social media platforms and websites that allow users to post will be forced to make a choice. Their first option would be to moderate less, allowing more objectionable content to be posted. Unfortunately, moderating fewer posts risks disincentivizing advertisers from purchasing digital ads, for fear their brands might become associated with content that does not align with their vision or values.
When Elon Musk completed his takeover of Twitter, for example, several large companies removed their adverts from the platform, sparking concerns about the overall financial stability of the company. For social media platforms and websites that are increasingly reliant on advertising revenue for survival, the loss of digital advertising could prove a death knell.
Alternatively, the more likely outcome is that social media platforms and websites increase their content moderation practices, abandoning the current light touch in favor of a more heavy-handed approach. With no liability protection, any content posted could result in a lawsuit, meaning every post would have to be reviewed to ensure it complies with the platform’s terms and conditions or is not “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”
As a result of this increased content moderation, the internet would be considerably less free than it is today with stifled speech and significant delays in posting as content is reviewed. Imagine it taking days to post a tweet, update a Facebook status, publish photos to Instagram, or receive breaking news. The internet simply would not be the same.
These are not abstract concerns but were experienced by Australians last year when the country’s highest court ruled that Google had defamed a politician by failing to remove videos published on YouTube. While the decision was eventually overturned, CNN closed its Facebook page, and a number of lawmakers were forced to make their pages read-only. As a result, Australians lost access to a news source, and constituents lost the ability to communicate with their legislators.
Narrowing or ending Section 230 protections would also make the internet a less competitive place by cementing the dominance of existing, well-established platforms. Smaller platforms that lack vast capital resources will simply be unable to meet the costs associated with litigation, or unable to employ enough content moderators to ensure posted content is not objectionable. Larger platforms, with greater capital resources, will be able to meet these demands and will face less competition. As such, ending Section 230 will only force users onto incumbent platforms, denying them the opportunity to access nascent competitors.
Social media platforms like Parler and Truth Social, which were created as alternatives to incumbents, have depended on Section 230’s protections to ensure they are not liable for content posted by others on their platforms. Without these protections, Parler and Truth Social could not exist, and conservatives would be forced to use sites they feel are suppressing their views.
With two cases before the Supreme Court, the future of a free and open internet is now in the hands of the nine justices. While conservatives may bemoan Section 230, any narrowing or ending of its protections would fundamentally reshape the internet as we know it, making it less free and less competitive, and denying Americans the opportunity to use alternative platforms that depend on critical legal protections to exist.