by Alexa Elder
Section 230 of the Communications Decency Act was passed in 1996. Considered by some to be the “most important law protecting internet speech,”[i] § 230 consists of two key provisions. The first provides that internet companies shall not be treated as the “publisher or speaker” of content posted on their sites by third-party users. The second provides that internet companies cannot be held liable for good-faith efforts to filter or block third-party content. In sum, § 230 grants online platforms immunity from civil liability both for third-party content and for the removal of such content.
Since the passage of § 230 in 1996, the number of internet users has grown from approximately 20 million to 4.66 billion. Undoubtedly, this growth can be attributed in large part to the broad protections afforded under § 230, which have encouraged entrepreneurs to invest in and innovate on online platforms; some have called § 230 the “twenty-six words that created the internet.”[ii] Those twenty-six words are:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.[iii]
However, § 230 has faced its fair share of criticism from both the left and the right, becoming a key political issue, particularly in the past year. President Trump has called for the law’s repeal and recently threatened to veto the National Defense Authorization Act (NDAA) unless § 230 is revoked. Prior to the election, President-elect Biden declared that § 230 should “immediately be revoked, number one.” Yet while those on both sides of the political spectrum support reforming § 230, the parties fundamentally disagree on why § 230 is a problem and how to fix it.
One critique is that § 230 provides too much protection to internet companies that moderate, or as some would say, censor, third-party content on their websites. On this view, internet companies should not be permitted to discriminate freely by removing content they deem unverified. Earlier this month, for example, YouTube announced that it will remove certain political speech videos disputing Biden’s election victory. Such concerns – including how tech platforms handled the presidential election – were addressed at a Senate Judiciary Committee hearing that included testimony from Jack Dorsey, the CEO of Twitter, and Mark Zuckerberg, the CEO of Facebook.
In response, a recent study released by POLITICO disputed the claim that conservative viewpoints are being censored.[iv] Rather, the study revealed that online conversations about some of the most controversial topics of 2020 were dominated by conservative voices. Right-wing social media posts generated in response to George Floyd’s death were shared ten times more often than liberal-generated posts, and right-wing posts about election fraud were shared twice as often as left-wing posts. Still, data may prove difficult to obtain. For example, at the recent Senate hearing, the question – “How many times have you blocked Republican officeholders? How many times have you blocked Democratic officeholders?” – was essentially met with a “we’ll certainly look into it” response from the two executives.[v]
Consequently, bipartisan concerns exist about a perceived lack of transparency by the tech industry. Earlier this year Senators John Thune (R) and Brian Schatz (D) introduced a bipartisan Section 230 bill known as the Platform Accountability and Consumer Transparency Act, or PACT, that focuses “more on exposing the process rather than changing it.”[vi]
Another critique of § 230, generally of a more liberal view, is that it provides technology companies with too much protection. Consequently, according to this view, internet companies have little incentive to regulate hate speech and false information posted on their online platforms. Proponents of this view contend that other industries are subject to laws and regulations. Why is the internet an exception? According to Mary Anne Franks, a law professor at the University of Miami, “if the conduct would not be speech protected by the First Amendment if it occurs offline, it should not be transformed into speech merely because it occurs online.”[vii]
This position is understandable when one considers cases like Matthew Herrick’s. Herrick was stalked and harassed by his ex-boyfriend through Grindr, an online dating app. Herrick’s ex-boyfriend used Grindr to create profiles impersonating Herrick and arranged for over a thousand men to come to Herrick’s home and workplace to engage in drug usage and role-play rape fantasies – activities that Herrick’s ex had described in the fake profiles. Herrick reported the fake profiles to Grindr over 100 times, but the profiles remained and the harassment continued. Herrick sued Grindr, demanding that Grindr remove the profiles targeting him and prevent similar impersonating profiles from being posted. The Southern District of New York dismissed the case, finding that Herrick’s claims were all “inextricably related to Grindr’s role in editing or removing offensive content—precisely the role for which Section 230 provides immunity.”[viii] On appeal, the Second Circuit upheld that judgment, agreeing that Herrick’s claims were barred under § 230.
However, some warn that amending § 230 could have unforeseen consequences. While Democrats have suggested that the easiest way to reform § 230 would be to leave the law largely intact and create “carve-out” exceptions for online activities that are particularly egregious, similar “carve-out” exceptions have previously done more harm than good. For example, two years ago § 230 was amended to remove immunity for internet companies that allowed unlawful sex advertisements to be posted on their platforms. While the 2018 “carve-out” exception sought to stop or reduce the rate of sex trafficking, after its passage cities across the country reported a steep rise in prostitution-related arrests. “Instead of ridding the Internet of sex trafficking, [the amendment] likely drove it to the nether reaches of the dark Web.”[ix]
Despite these concerns, reform of § 230 seems inevitable. Although the parties may disagree on the issues surrounding § 230 and the appropriate remedies, both sides overwhelmingly support reforming the law in some capacity. While the exact changes and their impact remain to be seen, the manner and extent to which immunity is afforded to internet companies under § 230 will likely change in the foreseeable future.
[i] Casey Newton, Everything You Need to Know About Section 230, THE VERGE (May 28, 2020), https://www.theverge.com/21273768/section-230-explained-internet-speech-law-definition-guide-free-moderation
[ii] JEFF KOSSEFF, THE TWENTY-SIX WORDS THAT CREATED THE INTERNET (2019).
[iii] 47 U.S.C. § 230(c)(1).
[iv] Mark Scott, Despite Cries of Censorship, Conservatives Dominate Social Media, POLITICO (Oct. 26, 2020), https://www.politico.com/news/2020/10/26/censorship-conservatives-social-media-432643
[v] Press Release, Cruz: Democrats and Big Tech’s Desire to Silence Dissent is a Dangerous Totalitarian Instinct (Nov. 17, 2020), https://www.cruz.senate.gov/?p=press_release&id=5470
[vi] Devin Coldewey, PACT Act Takes on Internet Platform Content Rules With ‘a Scalpel Rather than a Jackhammer’, TECHCRUNCH (June 24, 2020), https://techcrunch.com/2020/06/24/pact-act-takes-on-internet-platform-content-rules-with-a-scalpel-rather-than-a-jackhammer/
[vii] Sue Halpern, How Joe Biden Could Help Internet Companies Moderate Harmful Content, THE NEW YORKER (Dec. 4, 2020), https://www.newyorker.com/tech/annals-of-technology/how-joe-biden-could-help-internet-companies-moderate-harmful-content
[viii] Herrick v. Grindr, LLC, 306 F. Supp. 3d 579, 588 (S.D.N.Y. 2018), aff’d, 765 F. App’x 586 (2d Cir. 2019), cert. denied, 140 S. Ct. 221, 205 L. Ed. 2d 135 (2019).
[ix] Halpern, supra note vii.