By Mara Clemons*

Political deepfakes occupy the outer limits of First Amendment protection by combining political expression with uniquely persuasive deception. Their capacity to mislead voters erodes trust in authentic media and challenges the traditional assumption that false political speech can be reliably corrected through the marketplace of ideas. Recent judicial treatment of state-level deepfake regulations illustrates the narrow constitutional space available for regulation. Statutes regulating political deepfakes should not be subject to strict scrutiny when courts analyze them for First Amendment violations, because the defining harm arises not from the communication of false ideas but from the deliberate deception about speaker identity. By falsely representing who is speaking, deepfakes distort the informational cues that enable voters to assess credibility and accountability, placing them outside the core category of traditional political speech that the First Amendment is intended to protect. Regulations of political deepfakes target this deceptive mechanism rather than suppressing ideas or viewpoints, and they should therefore be evaluated under intermediate scrutiny.


Deepfakes pose a novel question about how far the law will go to protect speech. Lying at the outer bounds of the First Amendment’s protections, deepfakes are audio or visual media created or altered through artificial intelligence (“AI”) that falsely but realistically depict individuals engaging in speech or conduct that never occurred.1 Political deepfakes undermine informed democratic participation by spreading misinformation, eroding trust in authentic media, and fueling the liar’s dividend – the advantage political figures gain when the existence of deepfakes allows them to plausibly dismiss authentic evidence as fabricated.2 These harms have prompted legislative efforts to regulate political deepfakes. Because these laws target deception about a speaker’s identity rather than the dissemination of false ideas, they should not be subject to strict scrutiny. The high believability of deepfakes’ identity-based deception harms candidates, voters, and the electoral process by corrupting traditional mechanisms for evaluating speech. Accordingly, regulations that target deepfakes should be reviewed under intermediate scrutiny, which allows the government to carefully regulate these harms and avoids the near-certain condemnation deepfake regulations would otherwise receive under strict scrutiny.

First Amendment challenges have arisen against several state laws regulating election-related deepfakes.3 Notably, California passed a deepfake election law regulating a wide range of election-related content deemed “materially deceptive,” authorizing any recipient of such content to sue for damages.4 However, the Eastern District of California granted a preliminary injunction prohibiting enforcement of the statute because it was not sufficiently narrowly tailored, as required by strict scrutiny.5 Despite recognizing the importance of protecting the integrity of elections, the court found that the statute was overbroad and captured protected content beyond its “legitimate sweep.”6 Later, California’s law was permanently enjoined, again as a result of the court’s strict scrutiny analysis.7 First Amendment jurisprudence might suggest that laws regulating political deepfakes are subject to strict scrutiny because they affect political speech broadly. But that standard is ill-suited to the problem: it affords excessive protection to deceptive communications while providing insufficient protection to those misled by deepfakes.

The framework for determining whether a statute complies with the First Amendment is well-established. First, the government must be regulating speech.8 Media that communicates an idea, whether the medium is books, movies, or even video games, is considered speech.9 The dissemination of information also constitutes speech.10 Second, if the government is regulating speech, the court considers whether that speech is protected. Unprotected categories of speech include incitement,11 fighting words,12 defamation,13 fraud,14 obscenity,15 and speech as part of criminal conduct.16 In contrast, lawful political speech occupies “the core of what the First Amendment is designed to protect.”17 If the government is regulating protected speech, the court then determines what level of scrutiny to apply to the law.

When evaluating government action implicating the First Amendment, the level of scrutiny generally turns on whether the regulation is content-based or content-neutral.18 A law is content-based and subject to strict scrutiny if it “applies to particular speech because of the topic discussed or the idea or message expressed.”19 In contrast, a content-neutral law serves objectives unrelated to content, as illustrated in Ward v. Rock Against Racism, where the Court held that city sound guidelines for park concerts were content-neutral because they were designed to control noise levels.20 Strict scrutiny is applied to content-based laws because it is presumptively unconstitutional for the government to regulate speech based on the expressed message.21 Strict scrutiny therefore requires laws to be narrowly tailored to serve a compelling government interest using the least restrictive means.22 By contrast, intermediate scrutiny is applied to content-neutral laws and is a less rigorous standard because it does not require the government to use the least restrictive means of achieving its objective.23 A regulation satisfies intermediate scrutiny if the “regulation promotes a substantial government interest that would be achieved less effectively absent the regulation.”24

There is no majority consensus on whether a law regulating false speech is subject to strict or intermediate scrutiny. In United States v. Alvarez, the Supreme Court struck down the Stolen Valor Act, which prohibited lying about receiving military honors.25 In doing so, the Court did not reach a consensus as to what level of scrutiny applies to statutes regulating false speech.26 The Alvarez plurality concluded as a general rule that false speech is protected under the First Amendment and applied strict scrutiny to the Stolen Valor Act.27 The concurrence argued that intermediate scrutiny should apply to regulations of false factual statements because the risk of suppressing valuable ideas is reduced when the speech concerns objectively verifiable facts.28 However, the Justices in the plurality and concurrence agreed on a few acceptable content-based speech regulations, and those exceptions are binding on lower courts.29

First, the Alvarez plurality and concurrence agreed that regulations of false speech are permissible when their purpose is to protect the integrity of government processes.30 This includes laws targeting perjury and lying to or impersonating a government official.31 Second, they also agreed that prohibiting false speech that causes a legally cognizable harm, such as defamation and fraud, generally does not create constitutional issues.32 The Stolen Valor Act did not fit any of these exceptions because it punished false statements regardless of whether the lie was made to secure a material advantage and regardless of the statement’s lack of harmful effects.33 Political deepfakes, by contrast, produce harms far closer to those that justified exceptions for certain false-speech regulations in Alvarez, and the same reasoning therefore supports permitting their regulation.

Unlike a law that targets falsity alone, laws regulating deepfakes target the nonconsensual impersonation, not the message being conveyed. Because deepfakes are highly believable,34 it is the credibility of the impersonation itself, not the misconstrued message, that threatens the democratic process. An individual can share whatever false political message they please, but doing so under the guise of a representative for public office prevents voters from making informed choices about candidates. For example, deepfakes could harm elections in at least two ways. First, they may distort voters’ perceptions of a political figure’s positions or credibility by falsely depicting the figure announcing support for a controversial policy, making offensive remarks, or confessing to a scandal such as bribery. Second, deepfakes may mislead voters about the election itself by falsely portraying a candidate conceding defeat before polls close or by spreading inaccurate information about voter eligibility, polling locations, or voting times. Because nonconsensual impersonations cause tangible harms to electoral integrity, laws regulating deepfakes mirror the reasoning in Alvarez for creating exceptions for certain acceptable false speech regulations.

When deepfakes inflict a legally cognizable harm, the reasoning of both the plurality and concurrence in Alvarez permits regulatory intervention.35 Here, the harm caused by political deepfakes resembles the harms produced by defamation and fraud, two forms of speech which may be regulated under the First Amendment.36 By falsely depicting political figures, deepfakes cause harm akin to defamation by distorting voters’ perceptions and inflicting reputational damage. Additionally, deepfakes injure voters by inducing reliance on false representations, a harm analogous to fraud. Because deepfakes inflict harms substantially similar to the legally cognizable harms the Supreme Court has already recognized as warranting reduced First Amendment protection, legislatures should be able to regulate them without the constraints of strict scrutiny. Even if deepfakes do not fit neatly within fraud or defamation, the similarity of the harms justifies a lower level of scrutiny. And because Alvarez produced no majority rule governing false speech and political deepfakes depart fundamentally from traditional political expression, intermediate scrutiny should apply.

The undue credibility attributed to deepfakes undermines the argument that they should be protected to the same degree as ordinary political speech. False political speech was historically entitled to protection to safeguard debate and to avoid burdening speech with a requirement of truth.37 However, false political speech used to be something citizens could identify and evaluate themselves, so it was best left unregulated.38 As Justice Holmes stated, “the best test of truth is the power of the thought to get itself accepted in the competition of the market . . . .”39 Deepfakes cripple the viability of this argument because AI has increased the power of false information to be accepted as true in the marketplace.40 Thus, deepfakes should not be entitled to the same protections that other false speech receives because the underlying rationale that the marketplace of ideas will identify this speech as false is far less compelling for highly believable political deepfakes. Instead, regulations of deepfakes should be subject to intermediate scrutiny in recognition of citizens’ limited ability to identify their falsity and evaluate their message on that basis.

The Eastern District of California’s decision in Kohls v. Bonta striking down California’s political deepfake regulation under strict scrutiny further illustrates why intermediate scrutiny is more appropriate for deepfakes. First, strict scrutiny’s stringent narrow tailoring requirement typically condemns regulations of false speech because courts prefer counterspeech—responsive speech intended to rebut, correct, contextualize, or undermine the message or effects of another speaker’s speech—to a regulation that prohibits false speech.41 In Kohls v. Bonta, the district court concluded that California failed to demonstrate that counterspeech options like fact-checking were not an effective alternative to the government’s imposition.42 But the court failed to consider that for deepfakes, counterspeech is not an appropriate remedy because deepfakes deceive by capitalizing on the instinctive trust people place in audio and visual information.43 People cannot reliably detect deepfakes and in fact overestimate their own ability to do so.44 As a result, relying on counterspeech to weed out inauthentic videos is an ineffective way to protect elections from deceptive deepfakes. Deepfakes are sticky, such that their false impressions persist even after being debunked.45 Further, in an increasingly polarized political environment, some are skeptical of fact-checking organizations.46 Where deepfakes defeat counterspeech by design, courts’ insistence on counterspeech as a less restrictive alternative when applying strict scrutiny imposes an unrealistic burden on regulation. Intermediate scrutiny better accommodates this reality by permitting regulation of deepfakes.

Second, the court in Kohls erred by holding that existing causes of action, such as defamation and privacy torts, are a less restrictive alternative to California’s attempt to regulate deepfakes because they provide recourse to victims of deepfakes.47 But in addition to the defamatory harm to candidates, discussed above, the government and voters suffer harm to electoral integrity when deepfakes spread falsehoods about key election details, such as candidate positions and poll sites. For example, on the eve of the 2024 New Hampshire primary, robocalls impersonating President Biden’s voice told thousands of voters to “save your vote” and not participate in the election.48 Although the impact of these robocalls on voter turnout is quantitatively unclear, particularly because more experienced recipients could identify the voice imitation as fake,49 more sophisticated deepfakes could sway large numbers of voters to act in accordance with the command, especially those voters most susceptible to deception.50 Remedies in tort for the individual subject of the deepfake cannot provide recourse for grievous consequences to election outcomes caused by political deepfakes. By concluding that a statute is not narrowly tailored if existing causes of action offer remedies to only one affected party, strict scrutiny unduly constrains the government’s ability to regulate other identifiable and significant harms.

Third, the Kohls court erred by listing several narrower possible constructions of the statute to conclude that the statute was not narrowly tailored.51 The narrower constructions included application only to demonstrably false factual statements or to speech that causes voter intimidation or coercion.52 However, these limitations would significantly constrain the government’s discretion to identify and regulate broader harms from false impersonations beyond those narrow categories. A deepfake could still meaningfully harm voters’ perception of a candidate by misrepresenting their character, judgment, or intelligence, without directly intimidating voters or making a demonstrably false claim. For example, a deepfake could make a candidate appear confused or emotionally unstable without necessarily presenting false information. Under intermediate scrutiny, a court could still conclude that the statute is too broad, but it would not require the narrowest possible construction and would thereby give the government more space to regulate.

Unlike the demanding standards imposed by strict scrutiny, a regulation satisfies intermediate scrutiny if the “regulation promotes a substantial government interest that would be achieved less effectively absent the regulation.”53 Intermediate scrutiny does not require the regulation to use the least restrictive means of attaining the government interest, although it should not be significantly broader than needed.54 In cases of false speech, courts must consider the proportional fit of the regulation to the government’s interest.55

Intermediate scrutiny first requires a substantial government interest.56 Both protecting the integrity of elections and protecting public confidence in the integrity of democratic institutions are compelling government interests served by states’ deepfake regulations.57 The government’s interest in preventing fraud “carries special weight during election campaigns when false statements, if credited, may have serious adverse consequences for the public at large.”58 The Supreme Court found the government has a compelling interest in “protecting voters from confusion and undue influence” when it upheld regulations limiting political activity near polls.59

Next, under intermediate scrutiny, regulations must be proportional to the government’s interest.60 The proportionality of the regulation to the interest is where intermediate scrutiny differs from strict scrutiny. Under strict scrutiny, a regulation must be narrowly tailored to the least restrictive means of advancing a compelling government interest, so courts will focus their analysis on whether there is a less restrictive alternative.61 Instead, intermediate scrutiny’s proportionality requirement allows courts to assess whether a regulation meaningfully advances important governmental interests without burdening substantially more speech than necessary,62 rather than demanding the single least speech-restrictive alternative.

In Alvarez, the Stolen Valor Act did not survive Justice Breyer’s application of intermediate scrutiny because it was enacted to protect the integrity of military awards but its sweep was far broader than necessary to achieve its stated purpose.63 The Act prevented individuals from lying in almost any conceivable context, even where there would be very little harm to the integrity of military awards.64 As a result, the government’s interest could have been achieved through significantly less burdensome means. By contrast, political deepfake regulations would survive intermediate scrutiny under Justice Breyer’s standard so long as they contain limiting features that restrict the prohibition to deepfakes likely to harm electoral integrity. Further, Justice Breyer’s rationale for applying intermediate scrutiny to the Stolen Valor Act—namely that it regulated easily verifiable factual statements—applies here as well, because regulating deepfakes targets the verifiably false impersonation of a speaker rather than the message being conveyed. Intermediate scrutiny’s proportionality analysis thus yields better outcomes than strict scrutiny’s reliance on less restrictive alternatives, which cannot meaningfully mitigate the rapid and extensive harms caused by deepfakes. Intermediate scrutiny nonetheless preserves robust constitutional protections by requiring legislatures to pursue substantial governmental interests through narrow regulations, while relieving them of the impracticable obligation to identify the single least restrictive alternative in a rapidly evolving technological environment.

Political deepfakes expose a growing mismatch between existing First Amendment doctrine and emerging forms of AI manipulated speech. While courts are right to guard against government control of political ideas, extending strict scrutiny to deepfake regulation treats deceptive impersonation as though it were ordinary political speech. Strict scrutiny’s near guarantee of condemnation renders the government functionally incapable of regulating political deepfakes at all, even in the face of demonstrable threats to election integrity. Intermediate scrutiny therefore provides the best framework because it preserves First Amendment protections while allowing the government to meaningfully respond to harms that cannot realistically be cured by speech alone.


* Mara Clemons is a J.D. Candidate (2026) at New York University School of Law. This Contribution is a commentary on the problem at the 2025 UCLA School of Law Cybercrimes Moot Court Competition. One of the questions presented was whether a law prohibiting deepfakes intended to change the outcome of an election violated the First Amendment. The problem invited consideration of what standard of scrutiny should apply to deepfakes after United States v. Alvarez. This Contribution distills one side of the argument, and the views expressed herein do not necessarily represent the author’s views.


1. Deepfake, Oxford English Dictionary (2024).

2. Josh A. Goldstein & Andrew Lohn, Deepfakes, Elections, and Shrinking the Liar’s Dividend, Brennan Ctr. for Just. (Jan. 23, 2024), https://www.brennancenter.org/our-work/research-reports/deepfakes-elections-and-shrinking-liars-dividend.

3. See, e.g., Kohls v. Bonta (“Kohls I”), 752 F.Supp.3d 1187, 1195–96 (E.D. Cal. 2024) (granting a preliminary injunction prohibiting enforcement of California’s deepfake election law); Kohls v. Ellison, 166 F.4th 728, 733–34 (8th Cir. 2026) (affirming the district court’s denial of a preliminary injunction against Minnesota’s deepfake election law because the plaintiff delayed in seeking relief, but not reaching the merits of the constitutional claim).

4. See Kohls v. Bonta (“Kohls II”), 797 F.Supp.3d 1177, 1182 (E.D. Cal. 2025).

5. See Kohls I, 752 F.Supp.3d at 1195–96.

6. See id.

7. See Kohls II, 797 F.Supp.3d at 1182.

8. See Brown v. Ent. Merch. Ass’n, 564 U.S. 786, 790 (2011).

9. See id.

10. See Bartnicki v. Vopper, 532 U.S. 514, 526–27 (2001) (recognizing that disclosing or publishing information constitutes speech).

11. Brandenburg v. Ohio, 395 U.S. 444, 447–49 (1969) (acknowledging that the First Amendment protects advocacy of the use of force unless the speech is intended to incite imminent lawless action and is likely to produce such action).

12. Chaplinsky v. New Hampshire, 315 U.S. 568, 572 (1942).

13. Beauharnais v. Illinois, 343 U.S. 250, 254–55 (1952).

14. Va. State Bd. of Pharmacy v. Va. Citizens Consumer Council, Inc., 425 U.S. 748, 771 (1976).

15. Roth v. United States, 354 U.S. 476, 483 (1957).

16. Giboney v. Empire Storage & Ice Co., 336 U.S. 490, 498 (1949).

17. Virginia v. Black, 538 U.S. 343, 365 (2003).

18. See Reed v. Town of Gilbert, 576 U.S. 155, 163–64 (2015).

19. Id.

20. See Ward v. Rock Against Racism, 491 U.S. 781, 791–92 (1989) (finding a city’s guidelines for sound at events at a public park are not related to content but their “desire to control noise levels”).

21. See R.A.V. v. St. Paul, 505 U.S. 377, 382 (1992).

22. See United States v. Playboy Ent. Grp., Inc., 529 U.S. 803, 813 (2000).

23. See Ward, 491 U.S. at 798.

24. Id. at 798–99.

25. See United States v. Alvarez, 567 U.S. 709, 713 (2012) (plurality opinion).

26. See id. at 713 (joining together only four justices in a plurality who determined strict scrutiny should apply to the Stolen Valor Act); id. at 734–35 (Breyer, J., concurring in the judgment) (joining two justices who concurred that the Stolen Valor Act was unconstitutional but applied intermediate scrutiny).

27. Id. at 722.

28. Id. at 731–32 (Breyer, J., concurring in the judgment).

29. Marks v. United States, 430 U.S. 188, 193 (1977) (“When a fragmented Court decides a case and no single rationale explaining the result enjoys the assent of five Justices, the holding of the Court may be viewed as that position taken by those Members who concurred in the judgments on the narrowest grounds.” (internal citation omitted)).

30. Alvarez, 567 U.S. at 720–21 (plurality opinion); id. at 734–35 (Breyer, J., concurring in the judgment).

31. Id. at 720–21, 734–35.

32. Id. at 719, 734.

33. Id. at 722–23.

34. Nils C. Köbis, Barbora Doležalová & Ivan Soraperra, Fooled Twice: People Cannot Detect Deepfakes but Think They Can, ScienceDirect (Nov. 19, 2021), https://www.sciencedirect.com/science/article/pii/S2589004221013353 (finding viewers adopt a “seeing-is-believing” bias towards deepfake videos).

35. See Alvarez, 567 U.S. at 719 (plurality opinion); id. at 734 (Breyer, J., concurring in the judgment).

36. Id. at 719, 734.

37. See New York Times Co. v. Sullivan, 376 U.S. 254, 279 (1964) (recognizing that requiring people to speak truthfully when criticizing the government will inevitably limit speech).

38. See 281 Care Comm. v. Arneson, 766 F.3d 774, 796 (8th Cir. 2014) (“The citizenry, not the government, should be the monitor of falseness in the political arena . . . . [T]hey are equally poised to weigh counterpoints.”).

39. Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting).

40. See Sophie Loewenstein, Make America Fake Again?: Banning Deepfakes of Federal Candidates in Political Advertisements Under the First Amendment, 93 Fordham L. Rev. 273, 275 (2024).

41. See Alvarez, 567 U.S. at 727 (“The remedy for speech that is false is speech that is true.”).

42. See Kohls II, 797 F.Supp.3d at 1186–87.

43. Köbis, supra note 34.

44. Id.

45. Simon Clark & Stephan Lewandowsky, Seeing is Believing: The Continued Influence of Known AI-Generated ‘Deepfake’ Videos, ResearchGate (Apr. 2025), https://www.researchgate.net/publication/391287676_Seeing_is_Believing_The_Continued_Influence_of_Known_AI-Generated_%27Deepfake%27_Videos (finding even when warned a video is a deepfake, viewers exhibit a continued reliance on the video’s content).

46. Brendan Nyhan & Jason Reifler, Estimating Fact-checking’s Effects, Am. Press Inst. 9 (Aug. 31, 2016), https://www.americanpressinstitute.org/wp-content/uploads/2016/09/Estimating-Fact-Checkings-Effect.pdf (finding Republicans have less favorable views of fact checking than Democrats).

47. Kohls II, 797 F.Supp.3d at 1186.

48. Mekela Panditharatne, Preparing to Fight AI-Backed Voter Suppression, Brennan Ctr. for Just. (Apr. 16, 2024), https://www.brennancenter.org/our-work/research-reports/preparing-fight-ai-backed-voter-suppression.

49. Derek B. Johnson, New Hampshire Robocall Kicks Off Era of AI-Enabled Election Disinformation, Cyberscoop (Jan. 24, 2024), https://cyberscoop.com/biden-new-hampshire-robo-call-deepfake/ (noting the robotic and monotone voice of the call was unlikely to fool voters familiar with Biden’s voice but arguing the same may not be true for lesser-known local candidates).

50. Blake Ellis et al., How Elderly Dementia Patients Are Unwittingly Fueling Political Campaigns, CNN: CNN Investigates (Oct. 22, 2024), https://www.cnn.com/interactive/2024/10/politics/political-fundraising-elderly-election-invs-dg/ (investigating how elderly people, and those experiencing cognitive decline or dementia, have been misled by political texts asking for donations, leading to them collectively losing millions of dollars).

51. See Kohls II, 797 F.Supp.3d at 1186–87.

52. See id.

53. Ward, 491 U.S. at 798–99 (quoting United States v. Albertini, 472 U.S. 675, 689 (1985)).

54. Id. at 800.

55. See Alvarez, 567 U.S. at 730 (Breyer, J., concurring in the judgment); see also United States v. Mackey, 652 F.Supp.3d 309, 347 (E.D.N.Y. 2023) (applying Alvarez’s intermediate scrutiny standard to false speech).

56. Ward, 491 U.S. at 798–99.

57. See Williams-Yulee v. Fla. Bar, 575 U.S. 433, 457 (2015) (upholding a law prohibiting judges from individually soliciting funds).

58. McIntyre v. Ohio Elections Comm’n, 514 U.S. 334, 349 (1995).

59. See Burson v. Freeman, 504 U.S. 191, 199 (1992).

60. See Ward, 491 U.S. at 800 (finding intermediate scrutiny does not require the regulation to use the least restrictive means of attaining the government interest, but it should not be significantly broader than needed); see also Alvarez, 567 U.S. at 730 (Breyer, J., concurring in the judgment) (considering the proportional fit of the regulation to the government’s interest).

61. See Playboy Ent. Grp., Inc., 529 U.S. at 813.

62. See Turner Broad. Sys., Inc. v. Fed. Commc’ns Comm’n, 512 U.S. 622, 662 (1994).

63. See Alvarez, 567 U.S. at 737–38 (Breyer, J., concurring in the judgment).

64. Id.