by Matthew Uvas*

Certain classes of speech are deemed so dangerous that they fall outside the protections of the First Amendment. Regulation of one such class, incitement, seeks to prevent speech that encourages lawless and violent action. The modern test for whether speech qualifies as incitement hinges upon whether the speech is likely to produce imminent lawless action. However, when hateful or violent speech spreads online, there may be a delay between when a post is made and when someone sees it and responds violently. In these cases, imminence may not be an appropriate measure for identifying incitement online. This Contribution argues that history, case law, and other First Amendment jurisprudence suggest shifting the focus from imminence to context when regulating online incitement.


The freedom of speech protected by the First Amendment of the United States Constitution has long been understood, by both the Supreme Court and the public, to be one of the “fundamental personal rights and liberties which are protected by the Fourteenth Amendment from invasion by state action.”1 However, the Supreme Court has also recognized that the First Amendment does not protect all speech under all circumstances; there are limited classes of speech that the government may regulate and punish without raising constitutional concerns.2 Thus, state governments must balance competing interests: they must interpret the Fourteenth Amendment’s restrictions on government action as broadly as possible, but may limit those classes of speech which “are of such slight social value as a step to truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality.”3

One such class of speech is incitement, defined as “[t]he act of persuading another person to commit a crime.”4 This class is particularly important to the regulation of hate speech, given the propensity of hate speech to incite and cause violence.5 The modern test for incitement was established in Brandenburg v. Ohio, where the Supreme Court held that “advocacy of the use of force or of law violation” can only be regulated if that advocacy is “directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”6 In Brandenburg, the Court invalidated an Ohio statute that punished people for, among other things, “advocat[ing] or teach[ing] the duty, necessity, or propriety of violence as a means of accomplishing industrial or political reform.”7

Brandenburg’s limitation on incitement regulation—and its imminence requirement in particular—poses significant challenges for regulating dangerous speech on the internet.8 In the age of social media, a speaker can spread hateful or offensive speech in one part of the world and cause violence in another.9 However, because it is nearly impossible to determine if and when a listener will react to a post on the internet, such posts—despite the likelihood or foreseeability of violence—would constitute protected speech under Brandenburg’s conception of incitement.10

This Contribution argues that an overlooked aspect of incitement jurisprudence is the distinction between the abstract advocacy of illegality present in Brandenburg and more concrete proposals to engage in illegal activity.11 Online posts and hate speech that lead to violence more closely resemble concrete proposals than abstract advocacy; therefore, courts analyzing purported incitement in this context should modify the imminence requirement.12 The history of incitement regulation, circuit court holdings, and the regulation of other classes of unprotected speech (like true threats) suggest that a contextual approach to judicial incitement analysis—one that considers elements such as a prior history of violence or the composition of the target audience—may be better suited to the internet than viewing hateful or incendiary internet posts through the lens of imminence.13

Before Brandenburg, the Court held that speech could be regulated by the government where there was “a clear and present danger that a substantial public evil will result therefrom.”14 The Court in Dennis v. United States, upholding a conviction for a conspiracy to overthrow the government, rejected the notion that success or probability of success was the criterion for whether a clear and present danger existed, stating that “[t]he mere fact that . . . petitioners’ activities did not result in an attempt to overthrow the Government by force and violence is of course no answer to the fact that there was a group that was ready to make the attempt.”15 Rather, the Court held that where there existed “the ingredients of the reaction”—the formation of a conspiracy, members of a group willing to overthrow the government, and “the inflammable nature of world conditions”—the government need not wait until “the catalyst is added” to constitutionally restrain the conspiracy.16 Yet, the Court later held in Brandenburg that “advocat[ing] or teach[ing] the duty, necessity, or propriety of violence as a means of accomplishing industrial or political reform” could not be regulated unless it is “directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”17

Importantly, the Court in Brandenburg held that “mere abstract teaching,” which it protected from regulation, “is not the same as preparing a group for violent action and steeling it to such action.”18 This distinction implies that speech which goes beyond abstract advocacy and prompts specific illegal action does not fall under the First Amendment’s protection.19

More importantly for the internet context, this Brandenburg distinction also indicates that the requirements that speech be directed to imminent lawless action and be likely to produce such action are limited to the context of abstract advocacy.20 In Rice v. Paladin Enterprises, Inc., the Fourth Circuit followed this guidance and held that a book containing concrete instructions on how to kill a man was not protected by the First Amendment because it could prepare its audience for lawless action without resembling abstract advocacy.21 This approach essentially recognizes “a second category of unprotected speech inherent in that of incitement, which may be proscribed without regard to whether it’s directed and likely to produce imminent lawlessness.”22

Therefore, courts evaluating potentially dangerous online posts should look to this second category of incitement rather than to Brandenburg’s imminence requirement. In an era where social movements are largely furthered online and speech is consumed on demand, a focus on imminence is ill-suited to address the potential harm from online posts.23 Incitement on the internet is just as dangerous as incitement in person—if not more so, given the potential audience size—and the lag time between a posting and a violent reaction should be of little relevance.24 A more appropriate analysis would focus on the context of the speech to determine whether it constitutes a concrete proposal of unlawful action and would return to the inquiry in Dennis of whether there exist “ingredients of the reaction” that present a danger.25

Courts already use this type of analysis when it comes to true threats, a separate class of unprotected speech.26 The regulation of true threats, unlike incitement, is meant to protect individuals, rather than society at large, from “the fear of violence and from the disruption that fear engenders” as well as “from the possibility that the threatened violence will occur.”27 While circuit courts are split on the appropriate test to use,28 the context and effect of a potential threat on the listener are considered under both standards.29 For example, in Planned Parenthood of the Columbia/Willamette, Inc. v. American Coalition of Life Activists, the Ninth Circuit held that—given a history of killings associated with posting “wanted” posters of pro-choice advocates—the posting of similar “wanted” posters amounted to true threats even if they began as political statements.30

While the posters in Planned Parenthood were aimed at specific individuals, the analysis could be helpful when regulating internet hate speech in analogous circumstances. Suppose that the “wanted” posters were internet images that targeted not specific individuals but pro-choice advocates as a group. Under Brandenburg, the speech could not be regulated: on its face, the posters seem to be mere advocacy of violence for political change, and there is no proof of imminent violence.31 However, analysis of context (in this case, the killings that followed essentially identical posters) could reveal that the post was not abstract advocacy but rather a specific proposal of lawless action, more similar to the hitman instruction book in Rice.32 As the true threats analysis recognizes, courts should aim to assess the offending online speech’s purpose by looking to the context in which it was employed rather than turning to often unattainable evidence of imminence. This understanding of incitement provides a legal remedy for people threatened or intimidated by hate speech and removes the unnecessary burden of demonstrating imminence when preventing or punishing incitement.33

The prevention of incitement is of paramount importance, particularly as online hate speech has become more prevalent. The current test provided by the Court in Brandenburg is both misunderstood and ill-suited to a digital world. To prevent the harm associated with incitement, courts should take care to distinguish between abstract advocacy and concrete proposals of violence and, in doing so, should rely on the framework used in true threat cases, rather than Brandenburg’s imminence requirement, to properly evaluate the context in which inciting comments are made.


* Matthew Uvas is a J.D. Candidate (2023) at New York University School of Law. This Contribution arose from the problem presented at the 2022 National Telecommunications and Technology Competition hosted by the Catholic University of America Columbus School of Law. The question presented was whether an online social media platform’s “Hateful Conduct Algorithm” was constitutional in its regulation of language and posts on the platform that could have incited lawless action, true threats, or fighting words. This Contribution presents a distillation of arguments from the competition and does not necessarily represent the views of the author.

1. Chaplinsky v. New Hampshire, 315 U.S. 568, 570–71 (1942) (quoting Lovell v. City of Griffin, 303 U.S. 444, 450 (1938)).

2. See Chaplinsky, 315 U.S. at 571–72 (“These include the lewd and obscene, the profane, the libelous, and the insulting or ‘fighting’ words – those which by their very utterance inflict injury or tend to incite an immediate breach of the peace.”).

3. Id. at 572.

4. See Incitement, Black’s Law Dictionary (11th ed. 2019); United States v. Miselis, 972 F.3d 518, 532 (4th Cir. 2020).

5. See John C. Knechtle, When to Regulate Hate Speech, 110 Penn St. L. Rev. 539, 546–48 (2006) (illustrating the relationship between hate speech and violence in Bosnia and Rwanda).

6. Brandenburg v. Ohio, 395 U.S. 444, 447 (1969) (emphasis added).

7. Id. at 448 (quotation omitted).

8. See Lyrissa Barnett Lidsky, Incendiary Speech and Social Media, 44 Tex. Tech L. Rev. 147, 160–61 (2011) (explaining that imminence is the main obstacle to convicting violent internet posters because it is difficult to monitor whether an audience member is succumbing or will succumb in the future to violent advocacy in an online post).

9. See id. at 150–51 (describing an anti-Islamic Twitter post which caused a violent reaction in Afghanistan); Thompson v. Trump, No. 21-CV-00400, 2022 WL 503384, at *147 (D.D.C. Feb. 18, 2022) (finding that President Trump would have known that some of his supporters were prepared for violence and treated his invitation to Washington, D.C. on January 6th, 2021 as a call to action because of his social media posts).

10. See Lidsky, supra note 8, at 153 (explaining that although a Twitter post was likely to produce lawless action, there was no indication of imminent violent action because it was posted on the internet); Knechtle, supra note 5, at 570 (explaining the difficulty in the context of bomb-making instructions on the Internet).

11. See United States v. Williams, 553 U.S. 285, 298–99 (2008) (holding a statute prohibiting offers to provide or requests to obtain child pornography to be constitutional); Rice v. Paladin Enters., 128 F.3d 233, 264–65, 267 (4th Cir. 1997) (holding that a book containing concrete instructions on how to kill a man was not protected by the First Amendment because it can prepare one for lawless action without resembling abstract advocacy).

12. See Lidsky, supra note 8, at 162 (“[A] satisfactory replacement for imminence in cyber-incitement cases would focus on ensuring that the causal linkage between the speech and the harm was a direct one . . . .”).

13. See Knechtle, supra note 5, at 571 (“Perhaps the most direct way to transcend the imminence requirement is to prohibit threats of unlawful acts . . . [by] applying the ‘true threats’ doctrine to hate speech.”).

14. Dennis v. United States, 341 U.S. 494, 508 (1951).

15. Id. at 510.

16. Id. at 510–11.

17. Brandenburg, 395 U.S. at 447–48 (quotations omitted).

18. Id. at 448 (citing Noto v. United States, 367 U.S. 290, 297–98 (1961)).

19. Compare Brandenburg, 395 U.S. at 446 (explaining that a Klan member was giving a speech to organize a march on Congress where there “might have to be some revengeance taken”), with Dennis, 341 U.S. at 510–11 (finding a clear and present danger when there was “a highly organized conspiracy, with rigidly disciplined members subject to call[s]” for action, in the context of “inflammable . . . world conditions”), and Rice, 128 F.3d at 256 (holding that a book that “steels its audience . . . through exhaustively detailed instructions on the planning, commission, and concealment of criminal conduct” is not protected by the First Amendment). See also United States v. Freeman, 761 F.2d 549, 551–52 (9th Cir. 1985) (holding that speech counseling tax fraud is only protected to the extent that the speech is a general and abstract criticism of tax laws).

20. See Rice, 128 F.3d at 243 (“While even speech advocating lawlessness has long enjoyed protections under the First Amendment, it is equally well established that speech which, in its effect, is tantamount to legitimately proscribable nonexpressive conduct may itself be legitimately proscribed, punished, or regulated incidentally to the constitutional enforcement of generally applicable statutes.”).

21. Id. at 264–65.

22. Miselis, 972 F.3d at 533 (citing Rice, 128 F.3d at 243).

23. See United States v. Bagdasarian, 652 F.3d 1113, 1115 n.9 (9th Cir. 2011) (holding that online posts, although directly encouraging violence, did not satisfy the imminence requirement because they urged violence “at some indefinite future time”) (quoting Hess v. Indiana, 414 U.S. 105, 108 (1973)); Knechtle, supra note 5, at 571 (“[T]he imminence requirement also raises a question: why should the government be unable to regulate hate speech that foments violence just because the violence does not happen immediately after the speech is delivered?”).

24. See Lidsky, supra note 8, at 154 (“[The imminence requirement] makes less sense when applied to incendiary online speech captured on video, which foreseeably spurs violence almost immediately upon reaching a hostile audience thousands of miles away, even though that audience did not discover the video until days and weeks after the speech was uttered or the video was posted.”); Knechtle, supra note 5, at 570 (noting that the Department of Justice, in response to bomb-making instructions on the internet, stated to Congress that when it is foreseeable that speech will be used for criminal purposes, “imminent should be of little, if any, relevance”).

25. Dennis, 341 U.S. at 511.

26. See Virginia v. Black, 538 U.S. 343, 359 (2003) (explaining that true threats can be prohibited (citing Watts v. United States, 394 U.S. 705, 708 (1969))).

27. See Black, 538 U.S. at 359–60 (quotations omitted) (citing R.A.V. v. City of St. Paul, 505 U.S. 377, 388 (1992)). See also United States v. Dinwiddie, 76 F.3d 913, 922 n.5 (8th Cir. 1996) (“The Brandenburg test applies to laws that forbid inciting someone to use violence against a third party. It does not apply to statutes . . . that prohibit someone from directly threatening another person.”).

28. Circuit courts are split between “reasonable listener” tests, which focus on whether “an ordinary, reasonable recipient who is familiar with the context of the communication would interpret it as a threat of injury,” and “reasonable speaker” tests, which focus on whether a reasonable person would foresee their own statement as being interpreted as a threat. Compare United States v. Turner, 720 F.3d 411, 420 (2d Cir. 2013) (adopting the reasonable listener test) (cleaned up), United States v. Armel, 585 F.3d 182, 185 (4th Cir. 2009) (same), United States v. Nishnianidze, 342 F.3d 6, 15 (1st Cir. 2003) (same), United States v. Beale, 620 F.3d 856, 865 (8th Cir. 2010) (same), United States v. Morales, 272 F.3d 284, 287 (5th Cir. 2001) (same), and United States v. Jeffries, 692 F.3d 473, 480 (6th Cir. 2012) (same), with United States v. Magleby, 241 F.3d 1306, 1311–13 (10th Cir. 2001) (adopting the reasonable speaker test), and United States v. Kosma, 951 F.2d 549, 556–57 (3d Cir. 1991) (same).

29. See Planned Parenthood of the Columbia/Willamette, Inc. v. Am. Coal. of Life Activists, 290 F.3d 1058, 1074–75 n.7 (9th Cir. 2002) (noting that all circuits use an objective test and the difference between the “reasonable listener” and “reasonable speaker” test “does not appear to matter much”).

30. See id. at 1079.

31. Brandenburg, 395 U.S. at 447–48 (citing Noto, 367 U.S. at 297–98).

32. Rice, 128 F.3d at 264–65.

33. See Knechtle, supra note 5, at 578.