by Matthew Uvas*

The freedom of speech protected by the First Amendment of the United States Constitution has long been understood, by both the Supreme Court and the public, to be one of the “fundamental personal rights and liberties which are protected by the Fourteenth Amendment from invasion by state action.”1 However, the Supreme Court has also recognized that the First Amendment does not protect all speech under all circumstances; there are limited classes of speech that the government may regulate and punish without raising constitutional concerns.2 Thus, state governments must balance competing interests: they must interpret the Fourteenth Amendment’s restrictions on government action as broadly as possible, but may limit those classes of speech which “are of such slight social value as a step to truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality.”3

One such class of speech is incitement, defined as “[t]he act of persuading another person to commit a crime.”4 This class is particularly important to the regulation of hate speech, given the propensity of hate speech to incite and cause violence.5 The modern test for incitement was established in Brandenburg v. Ohio, where the Supreme Court held that “advocacy of the use of force or of law violation” can only be regulated if that advocacy is “directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”6 In Brandenburg, the Court invalidated an Ohio statute that punished people for, among other things, “advocat[ing] or teach[ing] the duty, necessity, or propriety of violence as a means of accomplishing industrial or political reform.”7

Brandenburg’s limitation on incitement regulation—and its imminence requirement in particular—poses significant challenges for regulating dangerous speech on the internet.8 In the age of social media, it is increasingly possible for someone to spread hateful or offensive speech and cause violence in another area of the world.9 However, given that it is nearly impossible to determine if and when a listener will react to a post on the internet, internet posts—despite the likelihood or foreseeability of violence—would constitute protected speech under Brandenburg’s conception of incitement.10

This Contribution argues that an overlooked aspect of incitement jurisprudence is the distinction between the abstract advocacy of illegality present in Brandenburg and more concrete proposals to engage in illegal activity.11 Online posts and hate speech that lead to violence are more similar to concrete proposals than to abstract advocacy; therefore, courts analyzing purported incitement in this context should modify the imminence requirement.12 The history of incitement regulation, circuit court holdings, and the regulation of other classes of unprotected speech (like true threats) suggest that a contextual approach to judicial incitement analysis, which would consider elements such as a prior history of violence or the composition of the target audience, may be better suited to the internet context than viewing hateful or incendiary internet posts through the lens of imminence.13

Before Brandenburg, the Court held that speech could be regulated by the government where there was “a clear and present danger that a substantial public evil will result therefrom.”14 The Court in Dennis v. United States, upholding a conviction for a conspiracy to overthrow the government, rejected the notion that success or probability of success was the criterion for whether a clear and present danger existed, stating that “[t]he mere fact that . . . petitioners’ activities did not result in an attempt to overthrow the Government by force and violence is of course no answer to the fact that there was a group that was ready to make the attempt.”15 Rather, the Court held that where there existed “the ingredients of the reaction”—the formation of a conspiracy, members of a group willing to overthrow the government, and “the inflammable nature of world conditions”—the government need not wait until “the catalyst is added” to constitutionally restrain the conspiracy.16 Yet, the Court later held in Brandenburg that “advocat[ing] or teach[ing] the duty, necessity, or propriety of violence as a means of accomplishing industrial or political reform” could not be regulated unless it is “directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”17

Importantly, the Court in Brandenburg held that “mere abstract teaching,” which it protected from regulation, “is not the same as preparing a group for violent action and steeling it to such action.”18 This distinction implies that speech prompting illegal action in terms more specific than abstract advocacy does not fall under the First Amendment’s protection.19

More importantly for the internet context, this Brandenburg distinction also indicates that the requirements of imminent lawless action and a likelihood to produce such action are limited to the context of abstract advocacy.20 In Rice v. Paladin Enterprises, Inc., the Fourth Circuit followed this guidance and held that a book containing concrete instructions on how to kill a man was not protected by the First Amendment because such a book can prepare one for lawless action without resembling advocacy.21 This approach essentially recognizes “a second category of unprotected speech inherent in that of incitement, which may be proscribed without regard to whether it’s directed and likely to produce imminent lawlessness.”22

Therefore, this second category of incitement, rather than the Brandenburg imminence requirement, is the more appropriate standard by which to evaluate potentially dangerous online posts. In an era where social movements are largely furthered online and speech is consumed on demand, a focus on imminence is ill-suited to address potential harm from online posts.23 Incitement on the internet is just as dangerous as incitement in person, if not more dangerous given the potential audience size; the lag time between the posting and a violent reaction should be of little relevance.24 A more appropriate analysis would focus on the context of the speech to determine if the speech constitutes a concrete proposal of unlawful action and would return to the inquiry in Dennis of whether there exist “ingredients of the reaction” which would present a danger.25

Courts already use this type of analysis when it comes to true threats, a separate class of unprotected speech.26 The regulation of true threats, unlike the regulation of incitement, is meant to protect individuals, rather than society at large, from “the fear of violence and from the disruption that fear engenders” as well as “from the possibility that the threatened violence will occur.”27 While circuit courts are split on the appropriate test to use,28 the context and effect of a potential threat on the listener are considered under both standards.29 For example, in Planned Parenthood of the Columbia/Willamette, Inc. v. American Coalition of Life Activists, the Ninth Circuit held that—given a history of killings associated with posting “wanted” posters of pro-choice advocates—the posting of similar “wanted” posters amounted to true threats even if they began as political statements.30

While the posters in Planned Parenthood were aimed at specific individuals, the analysis could be helpful when regulating internet hate speech in analogous circumstances. Suppose that the “wanted” posters were internet images that targeted not specific individuals but pro-choice advocates as a group. Under Brandenburg, the speech could not be regulated: on its face, the posters seem to be mere advocacy of violence for political change, and there is no proof of imminent violence.31 However, analysis of context (in this case, the killings that followed from essentially identical posters) could reveal that the post was not abstract advocacy but rather a specific proposal of lawless action, more similar to the hitman instruction book in Rice.32 As the true threats analysis recognizes, courts should aim to accurately assess the offending online speech’s purpose by looking to the context in which it was employed, rather than turning to often unattainable evidence of imminence. This understanding of incitement provides a legal remedy for people threatened or intimidated by hate speech and overcomes the unnecessary burden of demonstrating imminence when preventing or punishing incitement.33

The prevention of incitement is of paramount importance, particularly as online hate speech becomes more prevalent. The current test provided by the Court in Brandenburg is both misunderstood and ill-suited to a digital world. To prevent the harm associated with incitement, courts should take care to distinguish between abstract advocacy and concrete proposals to commit violence and, in doing so, should rely on the framework utilized in true threat cases, rather than Brandenburg’s imminence requirement, to properly evaluate the context in which inciting comments are made.


* Matthew Uvas is a J.D. Candidate (2023) at New York University School of Law. This Contribution arose from the problem presented at the 2022 National Telecommunications and Technology Competition hosted by the Catholic University of America Columbus School of Law. The question presented was whether an online social media platform’s “Hateful Conduct Algorithm” was constitutional in its regulation of language and posts on the platform that could have incited lawless action, true threats, or fighting words. This Contribution presents a distillation of arguments from the competition and does not necessarily represent the views of the author.

1. Chaplinsky v. New Hampshire, 315 U.S. 568, 570–71 (1942) (quoting Lovell v. City of Griffin, 303 U.S. 444, 450 (1938)).

2. See Chaplinsky, 315 U.S. at 571–72 (“These include the lewd and obscene, the profane, the libelous, and the insulting or ‘fighting’ words – those which by their very utterance inflict injury or tend to incite an immediate breach of the peace.”).

3. Id. at 572.

4. See Incitement, Black’s Law Dictionary (11th ed. 2019); United States v. Miselis, 972 F.3d 518, 532 (4th Cir. 2020).

5. See John C. Knechtle, When to Regulate Hate Speech, 110 Penn St. L. Rev. 539, 546–48 (2006) (illustrating the relationship between hate speech and violence in Bosnia and Rwanda).

6. Brandenburg v. Ohio, 395 U.S. 444, 447 (1969) (emphasis added).

7. Id. at 448 (quotation omitted).

8. See Lyrissa Barnett Lidsky, Incendiary Speech and Social Media, 44 Tex. Tech L. Rev. 147, 160–61 (2011) (explaining that imminence is the main obstacle to convicting violent internet posters because it is difficult to monitor whether an audience member is succumbing or will succumb in the future to violent advocacy in an online post).

9. See id. at 150–51 (describing an anti-Islamic Twitter post which caused a violent reaction in Afghanistan); Thompson v. Trump, No. 21-CV-00400, 2022 WL 503384, at *147 (D.D.C. Feb. 18, 2022) (finding that President Trump would have known that some of his supporters were prepared for violence and treated his invitation to Washington, D.C. on January 6, 2021 as a call to action because of his social media posts).

10. See Lidsky, supra note 8, at 153 (explaining that although a Twitter post was likely to produce lawless action, there was no indication of imminent violent action because it was posted on the internet); Knechtle, supra note 5, at 570 (explaining the difficulty in the context of bomb-making instructions on the internet).

11. See United States v. Williams, 553 U.S. 285, 298–99 (2008) (holding a statute prohibiting offers to provide or requests to obtain child pornography to be constitutional); Rice v. Paladin Enters., 128 F.3d 233, 264–65, 267 (4th Cir. 1997) (holding that a book containing concrete instructions on how to kill a man was not protected by the First Amendment because it can prepare one for lawless action without resembling abstract advocacy).

12. See Lidsky, supra note 8, at 162 (“[A] satisfactory replacement for imminence in cyber-incitement cases would focus on ensuring that the causal linkage between the speech and the harm was a direct one . . . .”).

13. See Knechtle, supra note 5, at 571 (“Perhaps the most direct way to transcend the imminence requirement is to prohibit threats of unlawful acts . . . [by] applying the ‘true threats’ doctrine to hate speech.”).

14. Dennis v. United States, 341 U.S. 494, 508 (1951).

15. Id. at 510.

16. Id. at 510–11.

17. Brandenburg, 395 U.S. at 447–48 (quotations omitted).

18. Id. at 448 (citing Noto v. United States, 367 U.S. 290, 297–98 (1961)).

19. Compare Brandenburg, 395 U.S. at 446 (explaining that a Klan member was giving a speech to organize a march on Congress where there “might have to be some revengeance taken”), with Dennis, 341 U.S. at 510–11 (finding a clear and present danger when there was “a highly organized conspiracy, with rigidly disciplined members subject to call[s]” for action, in the context of “inflammable . . . world conditions”), and Rice, 128 F.3d at 256 (holding that a book that “steels its audience . . . through exhaustively detailed instructions on the planning, commission, and concealment of criminal conduct” is not protected by the First Amendment). See also United States v. Freeman, 761 F.2d 549, 551–52 (9th Cir. 1985) (holding that speech counseling tax fraud is only protected to the extent that the speech is a general and abstract criticism of tax laws).

20. See Rice, 128 F.3d at 243 (“While even speech advocating lawlessness has long enjoyed protections under the First Amendment, it is equally well established that speech which, in its effect, is tantamount to legitimately proscribable nonexpressive conduct may itself be legitimately proscribed, punished, or regulated incidentally to the constitutional enforcement of generally applicable statutes.”).

21. Id. at 264–65.

22. Miselis, 972 F.3d at 533 (citing Rice, 128 F.3d at 243).

23. See United States v. Bagdasarian, 652 F.3d 1113, 1115 n.9 (9th Cir. 2011) (holding that online posts, although directly encouraging violence, did not satisfy the imminence requirement because they urged violence “at some indefinite future time”) (quoting Hess v. Indiana, 414 U.S. 105, 108 (1973)); Knechtle, supra note 5, at 571 (“[T]he imminence requirement also raises a question: why should the government be unable to regulate hate speech that foments violence just because the violence does not happen immediately after the speech is delivered?”).

24. See Lidsky, supra note 8, at 154 (“[The imminence requirement] makes less sense when applied to incendiary online speech captured on video, which foreseeably spurs violence almost immediately upon reaching a hostile audience thousands of miles away, even though that audience did not discover the video until days and weeks after the speech was uttered or the video was posted.”); Knechtle, supra note 5, at 570 (noting that the Department of Justice, in response to bomb-making instructions on the internet, stated to Congress that when it is foreseeable that speech will be used for criminal purposes, “imminent should be of little, if any, relevance”).

25. Dennis, 341 U.S. at 511.

26. See Virginia v. Black, 538 U.S. 343, 359 (2003) (explaining that true threats can be prohibited (citing Watts v. United States, 394 U.S. 705, 708 (1969))).

27. See Black, 538 U.S. at 359–60 (quotations omitted) (citing R.A.V. v. City of St. Paul, 505 U.S. 377, 388 (1992)). See also United States v. Dinwiddie, 76 F.3d 913, 922 n.5 (8th Cir. 1996) (“The Brandenburg test applies to laws that forbid inciting someone to use violence against a third party. It does not apply to statutes . . . that prohibit someone from directly threatening another person.”).

28. Circuit courts are split between “reasonable listener” tests, which focus on whether “an ordinary, reasonable recipient who is familiar with the context of the communication would interpret it as a threat of injury,” and “reasonable speaker” tests, which focus on whether a reasonable person would foresee their own statement as being interpreted as a threat. Compare United States v. Turner, 720 F.3d 411, 420 (2d Cir. 2013) (adopting the reasonable listener test) (cleaned up), United States v. Armel, 585 F.3d 182, 185 (4th Cir. 2009) (same), United States v. Nishnianidze, 342 F.3d 6, 15 (1st Cir. 2003) (same), United States v. Beale, 620 F.3d 856, 865 (8th Cir. 2010) (same), United States v. Morales, 272 F.3d 284, 287 (5th Cir. 2001) (same), and United States v. Jeffries, 692 F.3d 473, 480 (6th Cir. 2012) (same), with United States v. Magleby, 241 F.3d 1306, 1311–13 (10th Cir. 2001) (adopting the reasonable speaker test), and United States v. Kosma, 951 F.2d 549, 556–57 (3d Cir. 1991) (same).

29. See Planned Parenthood of the Columbia/Willamette, Inc. v. Am. Coal. of Life Activists, 290 F.3d 1058, 1074–75 n.7 (9th Cir. 2002) (noting that all circuits use an objective test and the difference between the “reasonable listener” and “reasonable speaker” test “does not appear to matter much”).

30. See id. at 1079.

31. Brandenburg, 395 U.S. at 447–48 (citing Noto, 367 U.S. at 297–98).

32. Rice, 128 F.3d at 264–65.

33. See Knechtle, supra note 5, at 578.