Section 230 and the Future of Social Media

By Philip Baillargeon

Zuckerberg testifies before a House Subcommittee concerning antitrust regulations and his operations as CEO of Facebook

Source: Graeme Jennings—Pool/Getty Images

Seemingly out of thin air, two monumental lawsuits were quietly filed in early December. Both concern the social media giant Facebook and accuse the company of engaging in anti-competitive behavior, a potential violation of antitrust laws that could strip platforms like Instagram and WhatsApp from its control. A bipartisan slate of attorneys general and the FTC have aligned in their interest in policing one of the largest, most powerful social media conglomerates in the world, which may seem odd to those familiar with the high level of political polarization our country is currently engulfed in. However, upon closer inspection, this rebuke of Facebook is less about the spirit of competitive industry and more about the complex puzzle of social media and appropriate government regulation. A brief primer on Section 230, the foundational law concerning social media that was written nearly twenty-five years ago, is necessary to explain why two opposing sides are uniting against a common enemy (with completely different terms of surrender).

Technically, Section 230 is a portion of the Communications Decency Act of 1996, and its aim was not to regulate forums or communications platforms of any kind (the modern social media platform would not arrive until 1997). The goal of the act was to regulate explicit material online and to give corporations, not government, the ability to police their own platforms. While many other provisions of the Communications Decency Act would not hold up in court, Section 230 was allowed to stand unaltered.

The specific language of the law states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This is why an average citizen can’t sue Facebook when someone posts an unflattering photo of them without permission or lies about them publicly; Facebook bears no responsibility for content posted to its platform by individual users. The intent of the law was to allow website moderators to remove explicit or otherwise harmful material; however, the law provides no incentive for a given company to do so. This is the state our Internet is in today: large tech companies have no incentive to regulate hate speech, and they cannot be held accountable for ignoring it.

The lawsuits being processed today are a sort of “warning shot”: Section 230 is on the chopping block, and without these sweeping liability protections, companies like Facebook would have to significantly rework their platforms and face many more legal challenges as they struggle to adapt. However, what should replace Section 230 is an entirely separate argument. Democrats like President-Elect Joe Biden have largely committed to replacing these liability protections with regulations requiring companies to take an active role in removing hate speech and falsehoods under threat of legal retaliation. Republicans like President Donald Trump, who encouraged Congressional Republicans to withhold funding for the military until Section 230 was eliminated (to no avail), believe that social media companies are censoring conservatives and wish to retaliate by removing all protections for those companies and eliminating their ability to regulate speech at all.

Section 230 is on its way out; what follows, however, is likely to be a period of relative anarchy in its wake. Depending on which party’s solution wins out, the new balance between government regulation and liability protection could make the Internet a very different place.