by Amanda Frame*

Section 230 is sometimes referred to as “the internet’s most important law.” The statute, enacted as part of the Communications Decency Act, generally immunizes website platforms from liability arising from content produced by third parties. In doing so, Section 230 eliminates many of the ordinary legal responsibilities assumed by traditional publishers of third-party content. Protections initially meant to facilitate the growth of the internet now shield powerful social media corporations and incentivize risky and permissive content moderation rules. A proposed shift in the statutory interpretation of “development” would make it easier for plaintiffs to bring facially valid claims against big tech and would allow finders of fact to properly determine when the statute’s existing “good faith” requirements have been violated.

Section 230 immunizes providers of interactive computer services, including social media websites, from civil liability for claims arising from the actions or speech of third parties,1 and from claims arising from their own good-faith attempts to filter or moderate third-party content.2 The provision was enacted in 1996 to preserve the ideal of a vibrant and competitive free market of ideas on the Internet and other interactive computer services, unfettered by Federal or State regulation.3

The impetus for the addition of Section 230 to the Communications Decency Act was the 1995 New York case Stratton Oakmont, Inc. v. Prodigy Services Co., in which an internet service provider was held liable for a defamatory message posted to one of its message boards.4 Because the provider had a practice of screening and editing posts, the court treated it as the publisher of the message board; this active role in the publication process was the source of liability. Providers were then faced with a tradeoff: a provider that acted like a publisher and voluntarily filtered content to improve the quality of the website could be held responsible for all content on the website, including content entirely authored by third parties. Meanwhile, “providers that bur[ied] their heads in the sand and ignore[d] problematic posts altogether escape[d] liability.”5 Congress enacted Section 230 as an amendment to the Communications Decency Act to remove this dilemma.

Under the current statutory regime, internet service providers are explicitly not to be treated like publishers, who normally face vicarious liability for the speech of the authors they publish.6 This provision gives website owners the ability to host and moderate third-party content without fear of legal liability, a grant of immunity that was essential to the development of social media and to the explosive growth of the internet. Section 230’s grant of immunity is broad. Courts have read it to protect internet service providers from most civil suits on the theory that, although providers supply a content platform, they are not themselves creating unlawful content or otherwise contributing to its unlawfulness. This interpretation of the statute is atextual7 and fails to consider the role that service providers’ engagement-based algorithms play in promoting, contextualizing, and persuading readers for or against a given viewpoint. A modern approach to the law governing civil immunities for websites would expand the scope of internet service provider activities (such as engagement-based algorithms) that “develop” unlawful content, while preserving existing guardrails against liability.

A defendant cannot be held liable for a civil claim concerning another author’s content published online unless they assisted in the development of the unlawful content.8 A defendant is entitled to immunity under Section 230 if they show “(1) [they are] a provider or user of an interactive computer service, (2) the claim is based on information provided by another information content provider and (3) the claim would treat the defendant as the publisher or speaker of that information.”9 Section 230 describes an information content provider as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”10

Section 230 has been interpreted broadly to extend immunity to “any cause of action that would make service providers liable for information originating with a third-party user of the service.”11 Further, courts have interpreted the broad grant of immunity in § 230(c)(1) as compelling them to decide close cases in favor of immunity.12 This, of course, does not give users carte blanche to commit all torts without liability; the statute contains carveouts for intellectual property, communications privacy law, sex trafficking, and state laws consistent with the section.13 Nor does Section 230 make immune an interactive computer service that is itself acting as an “information content provider” of published content giving rise to liability.14 Likewise, the statute allows for the enforcement of federal crimes for obscenity, stalking, and harassment,15 though it preempts any state law inconsistent with the statute’s protections.16


An internet service provider, like a social media company, will only be held liable for unlawful content if they created or developed it.17 However, there are two competing theories for what constitutes “development.” The most widespread of these is the “material contributions” test. Under this test, defendants are responsible for the development of information when they engage in an act beyond the normal functions of a publisher that changes the meaning and purpose of the content.18 Defining “development” in this way keeps intact the settled rule that the Communications Decency Act immunizes a webhost who exercises a publisher’s “traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content.”19 A material contribution is not “merely taking action that is necessary to the display of allegedly illegal content[,] . . . it means being responsible for what makes the displayed content allegedly unlawful.”20

The Ninth Circuit has signaled openness to the theory that a website may incur liability for some types of assistance or encouragement of unlawful conduct.21 In Fair Housing Council of San Fernando Valley v., LLC, the website was denied immunity for the design of its search and email systems, which limited the housing listings available to subscribers based on protected characteristics. The court found that developed and enforced a system that elicited illegal content (i.e., the input of discriminatory roommate criteria) and thus subjected its users to discriminatory housing practices.22 This “encouragement test” is a disfavored approach, as it attempts to broadly define “development” as any action that encourages third parties to engage in unlawful conduct.23

Even so, neither the “material contributions test” nor the “encouragement test” is sufficient to encapsulate the role of massive internet service providers in amplifying, displaying, and contextualizing inflammatory and violent speech. Both tests incorporate the technological exception for “neutral assistance.”24

Courts typically do not treat technologies deployed to help users aggregate, organize, or filter content on their own as content development or creation.25 Hashtags or trigger warnings are usually treated as categorization tools, not additional commentary. Because Congress designed Section 230 to empower users to be the master of their own internet experience,26 the immunity provisions incentivize interactive service providers to develop blocking and filtering technologies, and to maximize user control over the information they receive. The act of categorizing information, without more, “does not transform [a defendant] into a developer of the underlying misinformation.”27 Providing “neutral assistance” in the form of tools or technical means for filtering content is protected,28 as long as such tools are created by the website and are made equally available to bad actors and the intended users.29


There are significant risks caused by Section 230’s broad barrier to suits against powerful actors in Big Tech. Already powerful companies are further empowered to ignore harms caused by their algorithms. Several notable examples from the past few years have shown how algorithms that prioritize engagement aggregate and amplify hateful content, with devastating effects offline.

Amnesty International found in a report that Meta’s systems promoted violence against Myanmar’s Rohingya Muslim population in 2017.30 Compounding Facebook’s failure to remove anti-Rohingya inflammatory posts was the prominent use of engagement-based algorithms that amplified hate speech and created an “anti-Rohingya echo chamber” in the months and years prior to the 2017 genocide.

Similarly, in Gonzalez v. Google, argued before the Supreme Court in February 2023, plaintiffs, surviving family members of victims of terrorist attacks abroad, alleged that Google, Twitter, and Facebook’s algorithms matched and suggested content to users based on viewing history in a way that facilitated social networking among ISIS members and assisted ISIS in spreading its message.31 The claims alleged that the companies’ lax moderation policies allowed ISIS to actively recruit new members, plan attacks, and issue threats. The Ninth Circuit affirmed the dismissal of the claims of direct and secondary liability.

Lawmakers have introduced several regulatory and legislative reforms in recent years. One such proposal, the Safeguarding Against Fraud, Exploitation, Threats, Extremism, and Consumer Harms Act (the “SAFE TECH Act”),32 would limit Section 230 immunity to claims arising from third-party speech, rather than third-party information. This change could narrow the scope of protection to exclude the forms of neutral assistance that service providers attach to third-party content and would likely leave algorithms entirely exposed to liability. Another proposal, the Protecting Americans from Dangerous Algorithms Act,33 would explicitly make internet service providers liable for all but the most transparent algorithmic recommendations.

However, courts are well positioned to accommodate concerns over algorithmic promotion and the abdication of moderation responsibilities without dismantling the current regime or creating administratively burdensome conditions or carveouts. Courts can narrow Section 230’s broad barrier to suits without legislative intervention by changing their approach to statutory interpretation. By reviving and revising the “encouragement” test to account for the ways in which engagement-based algorithms facilitate networking among bad actors and promote potentially unlawful speech at a level of intensity that arguably carries its own message, courts can broaden the scope of what constitutes “development.” A broader understanding of “development” would promote liability for social media companies’ negligent use of algorithms. Adoption of the “encouragement test” would not, however, leave social media companies unjustly exposed to liability: a company would still be found liable only if a trier of fact determined that its efforts were undertaken in bad faith, as the statute requires.

This approach is consistent with the theory of liability.34 There, the Ninth Circuit held that design choices that contribute to making content unlawful are subject to liability. The use of engagement-based algorithms likewise goes far beyond the scope of the neutral assistance carveout. Algorithmic feeds choose to promote certain content to maximize engagement, even when that content is extreme. Because these companies have abdicated their social responsibility to ensure that information is delivered in a pro-social way, these extreme messages often feed into dangerous contexts where they lead to real-world violence. Decisions to promote high-engagement, low-social-value messaging regardless of the costs should be subject to tort liability, not the current, highly permissive regime. An expansion of the encouragement test is a judicially manageable way to accomplish this goal, one that honors the letter of Section 230 and properly balances interests in speech with interests in accountability for spreading destructive messages.

* Amanda Frame is a J.D. Candidate (2023) at New York University School of Law. This Contribution is a commentary on the problem at the 2022 William B. Spong Moot Court Tournament hosted by William & Mary Law School. The question presented was whether solicitation of potentially unlawful content for publication, and the embellishment of such content with hashtags and other commentary, negates the immunity provided to publishers of online content by Section 230 of the Communications Decency Act. This Contribution presents a distillation of the argument assigned to the author in the Tournament, and the views expressed herein do not necessarily reflect the views of the author.


1. 47 U.S.C. § 230(c)(1).

2. 47 U.S.C. § 230(c)(2).

3. 47 U.S.C. § 230(b)(2).

4. 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995).

5. Fair Hous. Council of San Fernando Valley v., LLC, 521 F.3d 1157, 1163 (9th Cir. 2008).

6. 47 U.S.C. § 230(c)(1).

7. See Malwarebytes Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13, 14 (2020) (Thomas, J., statement on denial of cert.) (“Courts have long emphasized nontextual arguments when interpreting § 230, leaving questionable precedent in their wake . . . . [I]n an appropriate case, we should consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by Internet platforms.”).

8. 47 U.S.C. § 230(f)(3).

9. F.T.C. v. LeadClick Media, LLC, 838 F.3d 158, 173 (2d Cir. 2016) (cleaned up).

10. 47 U.S.C. § 230(f)(3).

11. Almeida v., Inc., 456 F.3d 1316, 1321 (11th Cir. 2006) (quoting Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997)).

12. See, e.g.,, 521 F.3d at 1174; Jones v. Dirty World Entertainment Recordings LLC, 755 F.3d 398, 408 (6th Cir. 2014).

13. 47 U.S.C. § 230(e).

14. F.T.C. v. Accusearch, Inc., 570 F.3d 1187, 1197 (10th Cir. 2009) (quoting 47 U.S.C. § 230(c)(1)).

15. 47 U.S.C. § 230(b)(5).

16. 47 U.S.C. § 230(e)(3).

17. 47 U.S.C. § 230(f)(3).

18. Elliott v. Donegan, 469 F. Supp. 3d 40, 57 (E.D.N.Y. 2020) (citing Fair Hous. Council of San Fernando Valley v., LLC, 521 F.3d 1157, 1163 (9th Cir. 2008)).

19. Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997).

20. Jones v. Dirty World Entertainment Recordings LLC, 755 F.3d 398, 410 (6th Cir. 2014).

21., 521 F.3d at 1157.

22. Id. at 1171–72.

23. See, e.g., Jones, 755 F.3d at 414 (“[A]n encouragement test would inflate the meaning of ‘development’ to the point of eclipsing the immunity from publisher-liability that Congress established.”).

24. Elliott, 469 F. Supp. 3d at 60.

25. See generally id.

26. 47 U.S.C. § 230(b)(3).

27. Carafano v., Inc., 339 F.3d 1119, 1124 (9th Cir. 2003) (quotations omitted).

28. Herrick v. Grindr, LLC, 306 F. Supp. 3d 579, 586 (S.D.N.Y. 2018).

29., 521 F.3d at 1169.

30. Amnesty Int’l, The Social Atrocity: Meta and the Right to Remedy for the Rohingya, AI Index ASA 16/5933/2022 (Sept. 29, 2022).

31. 2 F.4th 871, 881–83 (9th Cir. 2021), cert. granted, 143 S. Ct. 80 (2022).

32. H.R. 3421, 117th Cong. (2021).

33. H.R. 8636, 116th Cong. (2020).

34. Fair Hous. Council of San Fernando Valley v., LLC, 521 F.3d 1157 (9th Cir. 2008).