Roundtable #4 | Section 230 of the Communications Decency Act
Foreword
The profound concentration of power in a handful of private tech companies is among the most pressing questions facing the free press in the United States. While platforms such as Google, Facebook, and Twitter have resisted being labeled publishers, their decisions about what to moderate, and what to leave alone, increasingly resemble editorial judgments. Compounding the issue is these companies’ outsized influence as intermediaries, and even gatekeepers, for human expression. In general, legislators have exempted social platforms from responsibility for what their users publish. This Roundtable explores the development of the law that cements this protection: Section 230 of the Communications Decency Act of 1996. Decades of court decisions have affirmed the robust immunity that Section 230 provides to internet companies. The question remains, however, whether a provision drafted before the existence of most social media companies can outlast the age of their rapid growth.
Jessica Lin
Roundtable Editor, Spring 2020
Section 1: History of the Communications Decency Act: The Survival of Section 230
In 1996, the Communications Decency Act (CDA) was passed as part of the Telecommunications Act, the federal government's first attempt to regulate the Internet. Specifically, the CDA was a wide-sweeping effort to combat the proliferation of obscene material online. While the Supreme Court eventually struck down much of the CDA in Reno v. American Civil Liberties Union (1997), Section 230 of the CDA was spared because it provided immunity from tort liability for internet service providers (ISPs) and users. Since then, court decisions have laid out a three-prong test for classifying online platforms, from discussion forums to online marketplaces. Doe v. Backpage.com, LLC (2016) solidified the following criteria for determining whether a website qualifies for immunity from tort liability: 1) Is the defendant a provider or user of an interactive computer service? 2) Is the defendant itself an information content provider with respect to the content at issue? 3) Does the liability claim treat the defendant as a publisher or speaker? [1]
The basis for claiming legal immunity stems from precedents predating the birth of the Internet. In Smith v. California (1959), the Supreme Court struck down a Los Angeles city ordinance that criminalized the possession of obscene writing in places where books are sold, establishing a distinction between active publishers (e.g., news outlets, book publishers) and neutral conduits (e.g., libraries, bookstores) with respect to tort liability. [2] Because the ordinance penalized possessors of obscene material who had no knowledge of its contents, the Court found that the city law violated the Due Process Clause of the Fourteenth Amendment, which obligates all levels of American government to apply the law fairly and equitably. Subsequent cases expanded upon the implications of Smith v. California for the freedom of expression: if distributors of information were liable even without knowledge of the contents of the books they sold, then they would self-censor in order to avoid litigation. [3][4] In the words of the Supreme Court in New York Times Co. v. Sullivan (1964), a law that encouraged self-censorship would “dampen the vigor and limit the variety of public debate” and would be “inconsistent with the First and Fourteenth Amendments.” [5] As such, libraries and other distributors of information gradually acquired protections against liability for third-party content, paving the way for the rapid proliferation of internet message boards and internet service providers that had begun to appear by the early 1990s.
By the 1990s, the birth of the Internet and its rapid expansion had left a void of relevant and applicable legislation to regulate cyber platforms and user-generated content. The precedents suggesting the need for regulation had already been established, but there existed no definitive framework by which certain websites or ISPs could be labeled as publishers or service providers, that is, as content creators or mere hosts. This ambiguity endured until two landmark cases, Cubby, Inc. v. CompuServe Inc. (1991) and Stratton Oakmont, Inc. v. Prodigy Services Co. (1995), forced Congress to set guidelines on the application of defamation law to the Internet, laying the foundation for Section 230 of the 1996 CDA.
In Cubby, Inc. v. CompuServe Inc., CompuServe was found not liable because its neutral policy of non-intervention toward all content rendered it a distributor, akin to a library or bookstore, rather than a publisher. In contrast, the subsequent ruling in Stratton Oakmont, Inc. v. Prodigy held Prodigy liable for libel committed by a third party because the company had employed a good-faith moderating team and was, therefore, a publisher of the libelous content. [6][7] Congress quickly recognized the perverse incentive that these two cases had established: ISPs were encouraged not to edit content even when it was in obvious violation of the law, since site moderation would incur liability for user-generated content. To remove this disincentive for good-faith content moderation, Congress passed Section 230 of the CDA to distinguish ISPs from publishers. Section 230(c)(1) establishes immunity from liability for illegal or offensive third-party content, while Section 230(c)(2) protects websites against civil liability when they engage in self-regulation of potentially harmful content. While the original text pertained to defamation liability, subsequent interpretations of Section 230 have expanded its scope beyond defamation to areas including intellectual property, commercial, and contract law. [8]
Despite Section 230’s continued role in protecting online platforms from third-party liability, it has not survived unscathed. Intense bipartisan criticism of Section 230 has recently invited increasingly narrow interpretations of the three-prong test for CDA 230 immunity. Courts applying these stricter interpretations have emphasized the need for Big Tech to take responsibility for regulating platforms such as Facebook in order to preserve the political integrity of the United States in light of alleged Russian disinformation campaigns. [9]
Andersen Gu
Roundtable Contributor
Section 2: Ongoing Controversy Surrounding Section 230
While Section 230 was created to temper opposition to the CDA, the provision has not been immune to criticism of its own. Section 230’s most notable political consequence was the passage of the FOSTA-SESTA bill package in 2018, which carved exemptions out of the original prohibition against treating interactive computer services as publishers or speakers of third-party content. [10] In an effort to reduce sex trafficking, the bill package specified that Section 230 does not apply to online platforms that face civil or criminal charges of sex trafficking or that promote and facilitate prostitution. As a result, sex workers have been pushed offline, which some suspect has made their work far less safe. An article in the San Francisco Chronicle highlighted the tripling of prostitution-related crime in the city following the shutdown of Backpage.com, a site known for its sex-for-sale advertisements. [11] Further concerns over narrowed interpretation of Section 230 have arisen because of the potential for political censorship under the veil of sex trafficking prevention. [12] For better or worse, FOSTA-SESTA ultimately narrowed the protections of Section 230 and its First Amendment roots.
The problem, however, extends beyond Backpage.com. Each political party has its own concerns about Section 230. Some Republicans fear that Section 230 is being used to push an “anti-conservative” agenda. Last year, for instance, Senator Josh Hawley of Missouri introduced a bill that would strip big social media sites, like Facebook and Twitter, of Section 230 immunity unless they could prove that they had not moderated their platforms in a politically biased way. [13] On the opposite side of the aisle, Speaker of the House Nancy Pelosi has criticized Section 230 for allowing tech companies to avoid responsibility for hate speech and misinformation. In a 2019 interview with Vox, she emphasized that revoking their civil liability immunity is “not out of the question.” [14]
More recently, in August of 2019, the Trump White House drafted an executive order that would allow the executive branch to regulate social media moderation under Federal Communications Commission (FCC) guidelines. The order responded to over 15,000 anecdotal complaints of social media platforms censoring American political discourse. Under the draft order, the FCC would be asked to eliminate immunity for social media sites that remove or suppress content for unfair or deceptive reasons, or without notifying the user who posted the material. [15]
Jeff Kosseff, a cybersecurity professor at the United States Naval Academy, seeks to uphold the good-faith provision of Section 230 that allows platforms to self-manage the hate speech of users. He argues that “the whole point is to provide platforms with the certainty that they can adopt the moderation practices that consumers believe necessary without being exposed to liability.” But Section 230 has undoubtedly made it more difficult to punish ‘bad actors,’ a term that free speech scholar Geoffrey Stone defines as those online service providers that “deliberately leave up unambiguously unlawful content that clearly creates a serious harm to others.” By immunizing websites that host content as malicious as revenge pornography, Section 230 shields sites from much-needed legal liability. As a solution, Danielle Citron, a law professor at Boston University, has proposed “to condition the immunity on reasonable content moderation practices rather than the free pass that exists today.” She argues that when courts consider a motion to dismiss on Section 230 grounds, the question should not be whether the service acted reasonably with regard to the specific use at issue, but whether it was equipped with processes to deal with the kinds of misuse presented. [16]
Notably, the tech industry’s recent debacles involving the spread of disinformation during the 2016 election, along with a slew of major privacy leaks, have eroded public support. And while Section 230 has come under fire for enabling discrimination against certain political viewpoints, many citizens continue to stress its importance in safeguarding individual speech rights. Though the issue is difficult to navigate without infringing on personal freedoms, it is imperative that governments formally address Section 230 soon, as day-to-day life becomes ever more digitized.
Annarosa Zampaglione
Roundtable Contributor
Section 3: The Future of Section 230: Implications of Revocation
Since its enactment in 1996, Section 230 has been the bedrock protection for big tech companies against liability for user content transmitted and regulated on their platforms. Over the years, the provision has shielded companies such as Facebook, YouTube, and Twitter from being held liable for their users’ posts and other third-party content. [17]
The provisions of Section 230 were intended to shield internet service providers from publisher liability, allowing them to manage their platforms with full discretion. [18] Recent challenges to Section 230 threaten its broad protections, carve out exceptions to publisher immunity, and open the door to much larger issues concerning online security. Pushback against Section 230, especially from the Trump Administration and top lawmakers, could spell big trouble for tech companies in the near future.
Earlier this year, in a bipartisan effort to hold tech companies more accountable, Republican Senator Lindsey Graham, Democratic Senator Richard Blumenthal, and other legislators introduced a Section 230 reform bill known as the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act. In line with FOSTA-SESTA, the EARN IT Act seeks to curtail the immunities provided by Section 230 in order to prevent online platforms from being used for the exploitation of minors. [19] The bill calls for internet service providers to “earn” their immunities: in effect, it exposes ISPs to liability unless they maintain a “best practices” certification on file with the Department of Justice.
If signed into law, the EARN IT Act would reignite court battles over the responsibilities of big tech companies in monitoring their platforms. The bill also calls for revisions to Section 230 that establish a new standard of liability, “by substituting ‘recklessly’ for ‘knowingly’ each place that term appears.” [20] Making ISPs liable for “recklessly” distributing material related to the exploitation of minors curtails their freedom to regulate their own platforms. The merit of a redefinition that imposes stricter moderation obligations on internet service providers ultimately depends on one’s faith in their ability to self-regulate in good conscience.
On May 28, President Trump issued an executive order directed at social media companies, aiming to “clarify” the scope and implications of Section 230. [21] The executive order follows the Trump Administration’s recent dispute with Twitter after the company fact-checked one of the president’s Twitter threads and flagged another tweet for violating its rule against “glorifying violence.” [22] Although Twitter preserved the President’s thread disparaging mail-in voting, its decision to attach fact-based reporting for additional context was bold and unprecedented.
In reaction to Twitter’s moderation of potentially misleading and disputable content, President Trump’s executive order calls for an assessment of social media companies that engage in what the administration considers “selective censorship.” [23] In reference to Section 230 (c)(1) and (c)(2), which distinguish ISPs from publishers, the executive order seeks to withhold the liability protections afforded to non-publishers from companies that engage in selective censorship. The order states that when companies engage in censoring “opinions with which they disagree… they cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators.” [24] Under Section 2 of the executive order, the Trump Administration seeks to limit the scope of immunity provided in the context of content censorship. In its legislative recommendations to Congress, the order specifically calls for companies that engage in “deceptive or pretextual actions” to be excluded from Section 230 protections. [25]
Given that President Trump’s executive order itself produces no change in the law, its proposals to curtail Section 230 and its policy recommendations to federal agencies are unlikely to gain much traction in the courts. Nonetheless, the order raises important questions about the growing power and influence of social media companies. At the heart of this debate is whether popular social media sites are the functional equivalent of a town square, where First Amendment rights apply, or whether they are more like a private house, where speech is not constitutionally protected against the owner.
While supporters of Section 230 seek to preserve a robust interchange of online ideas, critics are demanding more neutrality from ISPs. Platforms like Facebook may deny being “arbiters of truth,” but it has become increasingly clear that their decisions carry enormous weight in shaping public opinion and the spread of information. [26] At the same time, the language of Section 230 makes no distinction between large operators and small ones, which means that any change in the law will affect everyone. Thus, even if no complete overhaul of Section 230 is in sight, modified moderation rules can still heavily influence whose voices are heard online.
Ryan Milkman
Roundtable Contributor
Bibliography
[1] Doe v. Backpage.com LLC, No. 15-1724 (1st Cir. 2016).
[2] Smith v. California, 361 U.S. 147 (1959).
[3] New York Times Co. v. Sullivan, 376 U.S. 254 (1964).
[4] Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997).
[5] New York Times Co. v. Sullivan, 376 U.S. 254 (1964).
[6] Cubby, Inc. v. CompuServe Inc., No. 90 Civ. 6571 (S.D.N.Y. 1991).
[7] Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (N.Y. Sup. Ct. 1995).
[8] Perfect 10, Inc. v. CCBill LLC, 488 F.3d 1102 (9th Cir. 2007).
[9] Hwang, Tim. "Dealing with Disinformation: Evaluating the Case for CDA 230 Amendment." [Online] Available at: https://ssrn.com/abstract=3089442 (2017).
[10] “H.R.1865 - Allow States and Victims to Fight Online Sex Trafficking Act of 2017.” Congress.gov, April 11, 2017. https://www.congress.gov/bill/115th-congress/house-bill/1865
[11] Andresen, Ted, Sarah Ravani, and Megan Cassidy. “The Scanner: Sex workers returned to SF streets after Backpage.com shut down.” San Francisco Chronicle, Oct. 15, 2018. https://www.sfchronicle.com/crime/article/The-Scanner-Sex-workers-returned-to-SF-streets-13304257.php?psid=13FKf
[12] Woodhull Freedom Foundation v. The United States of America, 1:18-cv-01552 (2018) https://assets.documentcloud.org/documents/4567280/Woodhull-Freedom-Foundation-v-United-States-Filed.pdf
[13] U.S. Congress, Senate, A bill to amend the Communications Decency Act to encourage providers of interactive computer services to provide content moderation that is politically neutral (Ending Support for Internet Censorship Act), 116th Cong. (2019). https://www.hawley.senate.gov/sites/default/files/2019-06/Ending-Support-Internet-Censorship-Act-Bill-Text.pdf
[14] Johnson, Eric. “Nancy Pelosi says Trump’s tweets “cheapened the presidency” — and the media encourages him.” Vox, April 12, 2019. https://www.vox.com/2019/4/12/18307957/nancy-pelosi-donald-trump-twitter-tweet-cheap-freak-presidency-kara-swisher-decode-podcast-interview
[15] McGill, Margaret Harding, and Daniel Lippman. “White House drafting executive order to tackle Silicon Valley’s alleged anti-conservative bias.” Politico, Aug. 7, 2019. https://www.politico.com/story/2019/08/07/white-house-tech-censorship-1639051
[16] Chen, Angela. “What is Section 230 and why does Donald Trump want to change it?” MIT Technology Review, Aug. 13, 2019. https://www.technologyreview.com/s/614141/section-230-law-moderation-social-media-content-bias/
[17] Force v. Facebook, Inc., No. 18-397 (2d Cir. 2019).
[18] Heim, J., Stein, B., and Hoosier, M. (2020). Are Cuts Coming to the “Twenty-Six Words That Created the Internet?” [online] Available at: https://www.lexology.com/library/detail.aspx?g=820ad871-9415-4ad2-ba71-f3b6bb568921 [Accessed 7 Jun. 2020].
[19] Goldman, E. (2020). The “EARN IT” Act Is Another Terrible Proposal to “Reform” Section 230. [online] Available at https://blog.ericgoldman.org/archives/2020/02/the-earn-it-act-is-another-terrible-proposal-to-reform-section-230.htm [Accessed 7 Jun. 2020].
[20] U.S. Congress, Senate, EARN IT Act of 2020, S. 3398, 116th Cong., 2nd sess., introduced in Senate March 5, 2020. https://www.congress.gov/bill/116th-congress/senate-bill/3398/text
[21] “Executive Order 13925 of May 28, 2020, Preventing Online Censorship.” Federal Register 85 (2020): 34079-34083. https://www.govinfo.gov/app/details/FR-2020-06-02/2020-12030
[22] Pham, S. (2020). Twitter says it labels tweets to provide 'context, not fact-checking'. [online] Available at: https://www.cnn.com/2020/06/03/tech/twitter-enforcement-policy/index.html [Accessed 7 Jun. 2020].
[23] “Executive Order 13925 of May 28, 2020, Preventing Online Censorship.” Federal Register 85 (2020): 34079. https://www.govinfo.gov/app/details/FR-2020-06-02/2020-12030
[24] Ibid.
[25] Ibid., 34080.
[26] Rodriguez, S. (2020). Mark Zuckerberg says social networks should not be fact-checking political speech. [Online] Available at: https://www.cnbc.com/2020/05/28/zuckerberg-facebook-twitter-should-not-fact-check-political-speech.html [Accessed 7 Jun. 2020].