by Patience T. Adegboyega*

As technology continues to advance, legislatures are understandably concerned about maintaining the safety of their citizens. Doxing, the act of publicly releasing another’s information online, poses a threat to public safety. However, any legislation targeting doxing will have to be mindful to not infringe on the First Amendment rights of internet providers. In Moody v. NetChoice, LLC., the United States Supreme Court held that editorial discretion—the constitutionally protected right of older forms of media like newspapers to decide what speech they will or will not disseminate—also applies to social media platforms. This Contribution first lays out the framework courts use to analyze First Amendment speech issues, explores the meaning of editorial discretion, and demonstrates that doxing statutes targeting social media platforms may struggle to overcome First Amendment challenges based on editorial discretion. It then proposes a law that could address a platform’s response to doxing without triggering editorial discretion protections. Alternatively, it argues that even if such doxing regulation infringed on editorial discretion, states’ legitimate interests in enacting such legislation sufficiently outweigh the platform’s right to editorial discretion. While this Contribution does not explore it fully, it also recognizes that Section 230 of the Communications Decency Act of 1996 poses an additional barrier for the proposed doxing regulation if it survives constitutional scrutiny.


Advances in technology over the last twenty years have transformed society by revolutionizing communication and access to information, bringing with them “unparalleled opportunities and unprecedented dangers.”1 Social media in particular embodies this dichotomy. The potential for virality is a double-edged sword, and social media takes speech to a level far removed from that which existed when the First Amendment was written and ratified. On the one hand, social media can be used to amplify urgent messages, such as mutual aid sourcing and information about staying safe during natural disasters.2 On the other, it can be used to harass and defame people through doxing. After the 2023–2024 academic year, it is increasingly apparent that doxing is a problem that needs to be addressed immediately.3 However, in light of the Supreme Court’s decision in Moody v. NetChoice, LLC, as well as other key cases in First Amendment jurisprudence, there is a real question of how to do that effectively without running afoul of the First Amendment.

The term doxing, short for “dropping dox” (i.e., documents), refers to the practice of “us[ing] the Internet to source and collect someone’s personal and private information and then publicly releasing that information online.”4 One of the first lessons the generations that have grown up with the Internet learn is that everything posted there is permanent. Now, people must live with their personal information being posted online, with no guarantee that it will be removed, thereby compromising their physical safety indefinitely. Some scholars argue that doxing is a tool and that the approach to it should depend on the intent behind the act.5 They identify three types of doxing: “(1) punching down doxing (i.e., doxing for purely malicious purposes);6 (2) doxing for political purposes;7 and (3) the use of doxing by members of anonymous online communities as a tool for internal regulation (i.e., ‘unmasking’).”8

Many state legislatures have recognized the harm of doxing and have attempted to prevent it.9 However, the viability of these statutes—especially ones that target the platform’s response to doxing, rather than the act itself—depends on whether they can overcome two difficult barriers: the First Amendment protections of the regulated platforms and Section 230 of the Communications Decency Act of 1996 (“Section 230”). First, and most important to this Contribution, the choice about which posts to host on a platform might constitute protected speech under the First Amendment. Second, even if a platform’s protected speech is not implicated, the platform might still be shielded by the broad immunity that Section 230 grants internet providers.10 This Contribution focuses on the First Amendment issue and leaves the Section 230 issue for a separate analysis.

The First Amendment prohibits the government from passing laws that “abridg[e] the freedom of speech.”11 There are generally two kinds of laws that implicate the First Amendment. Content-neutral laws regulate speech without regard to the content of the speech or the speaker’s viewpoint.12 For example, a law prohibiting the sale or distribution of printed material in a certain space to prevent littering is content-neutral because its restriction on speech is unrelated to the content of the speech and instead regulates a particular environment.13 In contrast, content-based laws regulate speech based on the content of the speech or the speaker’s viewpoint.14 A law that limits what kinds of posters can be displayed depending on the message they communicate is a content-based law.15

When evaluating the constitutionality of content-neutral laws, courts consider whether the law “furthers an important or substantial governmental interest” and whether “the incidental restriction on alleged First Amendment freedoms is no greater than is essential to the furtherance of that interest.”16 In other words, a content-neutral restriction on speech can pass muster if it advances a legitimate governmental interest and is narrowly tailored, meaning there is no less speech-restrictive alternative that would address the problem the statute is intended to solve.17

On the other hand, with content-based laws, courts begin by presuming the law unconstitutional and apply strict scrutiny.18 The government can overcome that presumption only by demonstrating either: (1) that the statute is necessary to serve a compelling state interest and there are no less speech-restrictive alternatives;19 or (2) that the regulated speech is not protected under the First Amendment because the “social interest in order and morality” outweighs the “slight social value” of the speech.20 Examples of speech not protected by the First Amendment include threats21 and obscenity.22 Therefore, content-based laws that prohibit threats or obscene content do not violate the First Amendment.

The First Amendment’s protection of speech also applies to regulations targeted at media outlets. The Supreme Court has repeatedly recognized that traditional media, such as newspapers and television broadcasters, engage in expressive activity through their exercise of editorial discretion, making that exercise protected speech.23 Editorial discretion is the selection and presentation of information.24 In deciding “which stations or programs to include in its repertoire,” traditional media “communicate messages on a wide variety of topics and in a wide variety of formats.”25 Courts have closely scrutinized regulations that interfere with a publisher’s editorial discretion regardless of whether those regulations are content-based or content-neutral.26

As of the Supreme Court’s most recent term, social media platforms also enjoy the protection of editorial discretion. In Moody v. NetChoice, LLC, the Supreme Court extended First Amendment protection to internet providers—to the extent they are exercising editorial discretion.27 This is a departure from the way the Court had previously characterized the function of social media companies.28 After Moody, courts must ascertain when a social media platform is exercising editorial discretion to determine the permissibility of legislation regulating the platform’s activity.

Moody considered the validity of two states’ attempts to regulate content on social media platforms. In 2021, Florida and Texas passed similar statutes aimed at regulating certain social media platforms.29 The Texas law prohibited certain social media platforms from removing, deprioritizing, or blocking content based on a user’s views.30 The Florida law forbade social media platforms from banning political candidates and limited how platforms could distribute and prioritize posts about them.31 While the statutes covered different entities and activities, they both limited the ability of social media platforms to curate the content they present.

NetChoice, LLC, and the Computer & Communications Industry Association (“NetChoice”), internet trade associations that count Facebook and YouTube among their members, brought suit arguing that the statutes, on their face, violated social media platforms’ First Amendment rights.32 They argued that the platforms are editors of their users’ content, just like newspapers, and therefore draw First Amendment protection.33 NetChoice argued that the Texas law discriminated based on content34 and that the Florida law interfered with editorial discretion, compelled speech, and discriminated based on viewpoint, content, and speaker.35 Florida and Texas argued that the social media platforms were engaged in conduct, not speech, so the First Amendment did not apply.36

The Supreme Court vacated both circuit courts’ decisions and remanded because neither court had properly engaged in a facial analysis; instead, both treated the cases as if they were as-applied challenges.37 To guide the lower courts as they properly evaluate the facial challenges to the Florida and Texas statutes, the Court emphasized that to the extent the social media platforms were exercising editorial discretion, they were engaging in expression that receives First Amendment protection.38 The Court described the platforms’ activity as “compiling and curating others’ speech” or “deciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items.”39 Furthermore, it rejected Texas’s and Florida’s proffered rationales for their statutes as insufficient to justify an infringement on expression.40

In characterizing at least some of what social media platforms do as editorial discretion, Moody presents a complex issue for regulating online doxing. State legislatures have already recognized the harm that doxing presents and responded by passing laws that create criminal or civil penalties for doxing (sometimes even using the term doxing in the statute).41 However, these statutes are constitutionally suspect. For instance, Alabama passed a law that makes it a Class A misdemeanor to intentionally electronically publish, post, or provide the personal identifying information of another individual.42 Prohibitions on truthful speech, however, tend not to be enforceable,43 so it is unclear whether states can criminalize doxing in the face of the First Amendment. Even if these doxing statutes can survive constitutional scrutiny, the events of the 2023–2024 academic year demonstrate that doxing is still a rampant problem, and these statutes might not be the solution. First, determining whom to hold accountable for doxing would require tracing the post back to a specific person, which could prove difficult. Second, the digital nature of doxing could complicate the process of ascertaining which state has jurisdiction. Therefore, to the extent that states are considering broader means of regulating doxing than targeting the individuals who engage in it, the extension of First Amendment protections through the editorial discretion afforded to internet providers under Moody will be a hurdle.

Doxing laws that target social media platforms’ responses to doxing could circumvent the problem of editorial discretion by requiring the platforms to remove doxing posts upon the request of the subject of the doxing. Such a regulation avoids interfering with a platform’s editorial discretion because responding to a user’s removal request is different from curating a feed. In the original process of curation, the platform is presented with a large quantity of content and, without prompting from a third party, decides which content is included and excluded. By the time a user requests that content be removed, the original exercise of editorial discretion—the act of curating the content—has already concluded. Responding to a user’s request to remove content is therefore different from a platform’s decision about what to say or not say in the first place.

Moreover, many of the Court’s cases describing editorial discretion have dealt with what a medium is mandated to post.44 The prevailing argument against mandated messaging is that it forces the medium to adopt a message that it does not want to adopt. Indeed, NetChoice made a similar argument in Moody.45 Taking down a post at a user’s request, however, is different from forcing the platform to convey an affirmative message or to “print that which it would not otherwise print.”46

Opponents of regulations like the one proposed here could argue, as Justice Kagan did, that the differences between social media and traditional media do not negate the lessons drawn from First Amendment analyses in the context of traditional media.47 They could argue that the continuous nature of a cable or radio broadcast means that editorial discretion is likewise continuous, and thus there is no clear end to the curation process. Such a framing would place the decision of whether to acquiesce to a user’s request back within the realm of editorial discretion. Indeed, the Moody Court recognized that Facebook, for example, decides which posts to remove, censor, or disclaim. Framing the editorial process as continuous makes it extremely difficult to conclude that taking down a doxing post, regardless of whether it was prompted by a request, is not part of the platform’s editorial discretion.48

However, even if the proposed regulation were found to interfere with a platform’s editorial discretion, and therefore its protected speech, it may still be constitutional. If an anti-doxing statute is content-based, the state must show either that the speech does not draw First Amendment protection or that there is no less speech-restrictive alternative for achieving a compelling governmental interest.49 Showing either may require a regulation more closely tailored to a particular kind of doxing.

Legislation criminalizing the publication of truthful information is unlikely to be found constitutional because truthful information in the public domain is protected speech.50 Therefore, the Constitution likely forecloses regulating doxing for political or unmasking purposes. That leaves punching down doxing as the only kind of doxing that does not necessarily draw First Amendment protection. The First Amendment does not protect illegal speech or speech essential to a criminal act. As mentioned above, threats are not protected by the First Amendment and therefore can be made illegal.51 Because punching down doxing is explicitly malicious, it is more likely to include language that is not protected by the First Amendment, such as a true threat or harassment.52

If doxing is considered protected speech, a state could still assert—regardless of the type of doxing it is regulating—that its regulation remains constitutional because there is no less speech-restrictive alternative for promoting the regulation’s legitimate governmental interests. The Court has recognized that maintaining public safety and order is a legitimate state interest.53 In Hill v. Colorado,54 a group challenged a statute that prohibited people from “knowingly approach[ing]” within eight feet of another person, without that person’s consent, to pass “a leaflet or handbill to,” display “a sign to,” or “engag[e] in oral protest, education, or counseling with such other person . . .” in the regulated areas.55 Petitioners, sidewalk counselors offering abortion alternatives to pregnant people entering abortion clinics, sought a declaration that the statute was facially unconstitutional and an injunction against its enforcement.56 The Court recognized protecting the health and safety of citizens as a legitimate state interest and upheld the statute because protecting the public from confrontational and harassing speech advanced that interest.57 That the statute upheld in Hill was content-neutral does not change the validity of protecting the health and safety of the public as a legitimate governmental interest.58

Just as protecting the public from confrontational and harassing interactions was a legitimate governmental interest in Hill, a state would have a legitimate interest in maintaining the health and safety of its citizens by preventing, mitigating, or reducing instances of doxing. Doxing can have many adverse effects. Victims of doxing, particularly punching down doxing, often experience mental health harms, threats to their physical safety, financial losses, and damage to their professional reputations.59 The state’s interest in preventing such harm should outweigh the editorial discretion protections afforded to social media platforms.

Finally, while this analysis has focused on the constitutionality of doxing regulations, it is important to note that Section 230 also presents very real challenges to doxing statutes. Congress passed Section 230 to “promote the continued development of the Internet and other interactive computer services and other interactive media.”60 It prevents internet service providers from being held liable for “claims based on information posted by a third-party.”61 An investigation into whether and how a statute regulating doxing could operate in light of Section 230 remains necessary.

Each new advance in technology brings new problems to tackle. The phenomenon of doxing presents unprecedented challenges. In an ideal world, social media platforms would address it themselves. Realistically, however, protecting the general public from the dangers of doxing will require a joint effort between legislatures and platforms.


* Patience Adegboyega is a J.D. Candidate (2025) at New York University School of Law. This Contribution is commentary on a topic the author became interested in as a result of a personal experience with doxing related to the Israel-Gaza conflict, as many students experienced in the fall semester of the 2023–2024 academic year.

1. Moody v. NetChoice, LLC, 144 S. Ct. 2383, 2393 (2024).

2. See, e.g., Evangelos Ntontis et al., Tracking the Nature and Trajectory of Social Support in Facebook Mutual Aid Groups During the COVID-19 Pandemic, 76 Int’l J. of Disaster Risk Reduction 103043 (2022) (noting that during the COVID-19 pandemic, mutual aid groups operated on social media platforms as avenues for people to request support); Emergency Communications: The Power of Social Media, United Nations Dev. Programme (Feb. 20, 2024), https://www.undp.org/belarus/stories/emergency-communications-power-social-media (noting that social media provides a mechanism for emergency services to send out alerts and safety tips and also receive updates from witnesses of emergency events).

3. See Tesfaye Negussie & Nadine El-Bawab, Doxxing campaign against pro-Palestinian college students ramps up, ABC News (Oct. 20, 2023, 6:06 AM), https://abcnews.go.com/International/doxxing-campaign-pro-palestinian-college-students-ramps/story?id=104141630.

4. Beth Brindle, What is Doxxing? 5 Ways to Protect Yourself, HowStuffWorks (May 9, 2024), https://bit.ly/2RAb86E.

5. E.g., Julia M. MacAllister, The Doxing Dilemma: Seeking a Remedy for the Malicious Publication of Personal Information, 85 Fordham L. Rev. 2451, 2457 (2017).

6. An example of punching-down doxing would be an ex-boyfriend posting his ex-girlfriend’s contact information online and soliciting sex while impersonating her in online chat rooms. See id. at 2458.

7. An example of doxing for political purposes would be publishing sensitive military documents to expose the interconnectedness of the wars in Afghanistan and Iraq. See id. at 2460–61. This type of doxing typically targets the privacy of a government rather than an individual.

8. An example of doxing to unmask would be revealing the identity of a member of an anonymous group to other group members to expose that person for railroading the group. See id. at 2461–62.

9. David L. Hudson, Jr., Is doxing illegal?, The Found. for Individual Rts. and Expression (Feb. 28, 2024), https://www.thefire.org/research-learn/doxxing-free-speech-and-first-amendment; see also infra note 41.

10. Kimzey v. Yelp! Inc., 836 F.3d 1263, 1269 (9th Cir. 2016) (holding that websites lose their Section 230 immunity if they create or develop content “by making a material contribution to the creation or development of content”). Notably, the description of the type of behavior that causes websites to lose Section 230 immunity is very similar to how the Supreme Court describes editorial discretion. See infra note 23.

11. U.S. Const. amend. I.

12. Heffron v. Int’l Soc’y for Krishna Consciousness, 452 U.S. 640, 648 (1981) (quoting Pharmacy Bd. v. Va. Citizens Consumer Council, 425 U.S. 748, 771 (1976)).

13. Id. at 643.

14. Reed v. Town of Gilbert, 576 U.S. 155, 171 (2015).

15. Id. at 164.

16. United States v. O’Brien, 391 U.S. 367, 377 (1968).

17. Id.

18. Simon & Schuster, Inc. v. N.Y. State Crime Victims Bd., 502 U.S. 105, 115 (1991).

19. Id. at 123 (finding that at least one of the state’s interests in enacting the speech restrictive law was compelling but not narrowly tailored).

20. R.A.V. v. City of St. Paul, 505 U.S. 377, 383 (1992).

21. See id. at 388 (holding that “threats of violence are outside the First Amendment” because the purpose of the First Amendment is to protect “individuals from the fear of violence, from the disruption that fear engenders, and from the possibility that the threatened violence will occur”).

22. Miller v. California, 413 U.S. 15, 23 (1973) (“This much has been categorically settled by the Court, that obscene material is unprotected by the First Amendment.”).

23. Mia. Herald Pub. Co. v. Tornillo, 418 U.S. 241, 243 (1974) (striking down a statute that required newspapers to publish the response of a political candidate to criticism and attacks in the newspaper); see also Los Angeles v. Preferred Comm’ns, Inc., 476 U.S. 488, 494 (1986) (finding that a cable company engages in expressive speech by creating original programming or by exercising editorial discretion over which stations and programs to broadcast).

24. Ark. Educ. Television Comm’n v. Forbes, 523 U.S. 666, 674 (1998).

25. Preferred Comm’ns, 476 U.S. at 494.

26. Compare Reed, 576 U.S. at 171 (striking down a law restricting the posting of certain signs depending on their message because it was content-based and did not survive strict scrutiny), with Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 643 (1994) (Turner I) (holding that the Cable Act of 1992’s must-carry provisions, which required cable operators to carry certain programmers, burdened the operators’ editorial discretion and were subject to heightened First Amendment scrutiny even though they were content-neutral).

27. Moody, 144 S. Ct. at 2393 (2024).

28. See Twitter, Inc. v. Taamneh, 598 U.S. 471, 499 (2023) (describing Twitter’s sorting and presenting of user content as a passive mechanism for organizing large amounts of third-party content).

29. See Fla. Stat. § 501.2041(1)(g) (2023); Tex. Bus. & Com. Code Ann. §§ 120.001(1), 120.002(b) (West 2023).

30. Tex. Bus. & Com. Code Ann. §§ 120.001(1), 120.002(b).

31. Fla. Stat. § 501.2041(1)(g).

32. NetChoice, LLC v. Moody, 546 F. Supp. 3d 1082, 1084–85 (N.D. Fla. 2021); NetChoice, LLC v. Paxton, 573 F. Supp. 3d 1092, 1101 (W.D. Tex. 2021).

33. Brief for Petitioners at 23, Moody v. NetChoice, LLC, 144 S. Ct. 2383 (2024) (No. 22-277).

34. Id. at 107.

35. Id. at 62–63.

36. Id. at 5; Transcript of Oral Argument at 52, NetChoice, LLC v. Paxton, 144 S. Ct. 2383 (2024) (No. 22-555).

37. Moody, 144 S. Ct. at 2394.

38. Id. at 2401–02.

39. Id. at 2401–02.

40. Id. at 2402 (rejecting the argument that the government can overcome First Amendment protections by asserting an interest in improving the marketplace of ideas).

41. See, e.g., Ariz. Rev. Stat. § 13-2916 (making it punishable by up to six months’ imprisonment and/or a fine of up to $2,500 to publish a person’s identifying information online in order to cause unwanted physical contact or injury by a third party); Cal. Penal Code § 653.2 (making it punishable by up to one year’s imprisonment and/or a fine of up to $1,000 to publish a person’s identifying information online without their consent with the intention to place that person or their immediate family in fear for their safety); Or. Rev. Stat. § 30.835 (creating a civil cause of action for knowingly causing someone’s identifying information to be disclosed with the intention to stalk, harass, or injure that person).

42. Ala. Code § 13A-11-38.

43. See infra note 50.

44. See, e.g., Tornillo, 418 U.S. at 243–44 (overturning a statute that forced newspapers to publish a political candidate’s response to criticism made in the newspaper).

45. Brief for Petitioners at 24, Moody v. NetChoice, LLC, 144 S. Ct. 2383 (2024) (No. 22-277).

46. Tornillo, 418 U.S. at 256.

47. Moody, 144 S. Ct. at 2403.

48. Id.

49. See supra text accompanying notes 18–20.

50. E.g., Cox Broad. Corp. v. Cohn, 420 U.S. 469, 496 (1975) (a state cannot sanction the publishing of the name of a rape victim that was already revealed publicly in court records); Okla. Pub. Co. v. District Court of Okla., 430 U.S. 308, 311 (1977) (a court cannot restrict the dissemination of truthful information that was made public). But see Smith v. Daily Mail Pub. Co., 443 U.S. 97, 103 (1979) (a state can punish the publishing of truthful information if it articulates an interest of the “highest order”).

51. See supra text accompanying note 21.

52. Cohn, 420 U.S. at 496; Okla. Pub. Co., 430 U.S. at 311.

53. Schenck v. Pro-Choice Network of W.N.Y., 519 U.S. 357, 376 (1997).

54. 530 U.S. 703, 707 (2000).

55. Colo. Rev. Stat. § 18-9-122(3) (1999).

56. Hill, 530 U.S. at 708.

57. Id. at 715.

58. Id. at 719–20.

59. Max Sheridan, Doxxing Statistics in 2024: 11 Million Americans Have Been Victimized, SafeHome.org (Aug. 8, 2024), https://www.safehome.org/family-safety/doxxing-online-harassment-research/.

60. 47 U.S.C. § 230(b)(1).

61. Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997).