by Elysia Wu*
The Fourth Amendment’s protections against unreasonable searches and seizures were developed when law enforcement investigations relied on publicly exposed information and relatively simple, localized data-collection technologies. The rapid adoption of artificial intelligence (“AI”) in law enforcement, however, places increasing strain on these traditional Fourth Amendment protections. Predictive policing software, facial recognition systems, and other algorithmic tools can generate detailed profiles of individuals without a warrant or individualized suspicion, enabling wide-scale police monitoring in a continuous, data-driven way. Treating AI tools as functionally equivalent to traditional policing techniques risks eroding constitutional safeguards and exacerbating existing racial disparities in the criminal legal system. To avoid these harms and ensure that the adoption of AI by law enforcement does not erode the Fourth Amendment’s core protections, courts must find that AI surveillance requires a warrant due to its scale, continuity, and predictive inferences.
The Fourth Amendment’s protections against unreasonable searches and seizures were established in an era when surveillance involved physical trespass, human observation, and limited technological tools.1 But modern technologies have led the Supreme Court to adapt Fourth Amendment doctrine in recognition of new law enforcement surveillance capacities.2 Recently, courts have begun to grapple with the constitutional implications of advanced surveillance methods, particularly as surveillance moves from discrete monitoring to continuous, population-wide data collection.3 Although the Supreme Court has yet to address the emergence of artificial intelligence (“AI”), its rapid evolution and adoption by law enforcement indicate that it will be at the forefront of Fourth Amendment litigation in the coming years. While early Fourth Amendment decisions might indicate that public information collected by AI is not constitutionally protected,4 AI’s potential to intrude on core privacy interests and produce discriminatory enforcement of criminal law renders its use a “search” subject to the Fourth Amendment’s warrant requirement.
Today, police forces increasingly deploy AI as an investigative aid, including predictive policing software that takes traditional “hotspot policing”—where officers manually examine crime patterns to forecast where offenses are likely to occur—and modernizes it, as well as AI-powered systems that analyze video footage and social media, detecting suspicious behavior or potential threats in real time.5 Cities are also increasingly investing in AI-powered tools such as gunshot detection.6 These tools permit the collection and analysis of immense quantities of data, allowing law enforcement to generate detailed profiles of individuals and communities without individualized suspicion or judicial oversight.7 The appeal of AI is clear: a recent study found that smart technologies like AI can help cities reduce crime by 30 to 40 percent and cut emergency response times by 20 to 35 percent.8 These figures underscore the practical promise of AI-powered policing and help explain why law enforcement agencies around the country are eager to adopt such tools. But the very features that make AI so attractive to police—its efficiency, comprehensiveness, and predictive capability—also raise core constitutional concerns regarding liberty, privacy, and equality under the law.
AI surveillance differs from conventional methods of policing in that it is continuous, overly broad, and predictive.9 AI systems do not merely enhance efficiency; they also carry the risk of amplifying racial bias, as algorithmic outputs and predictions often reflect historical patterns of over-policing in marginalized communities.10 Thus, the searches resulting from AI surveillance threaten to replicate the generalized, arbitrary, and suspicionless searches the Fourth Amendment was designed to prevent.11 Given AI’s widespread adoption by law enforcement agencies, courts may soon confront a pressing question: at what point does continuous, data-driven, population-wide surveillance constitute a search requiring a warrant under the Constitution?
The Supreme Court’s jurisprudence supplies the starting point for any attempt to answer this question. The modern law of the Fourth Amendment begins with Katz v. United States, which famously held that “the Fourth Amendment protects people, not places.”12 In Katz, federal agents attached an electronic listening device to the outside of a public phone booth and recorded the defendant’s conversation.13 Although the government insisted no constitutional violation had occurred because the device never penetrated the booth’s interior, the Court rejected a purely property- and trespass-based analysis, under which Fourth Amendment protection turns on trespass under local property law.14 Instead, the Court grounded its decision in the reasonable-expectation-of-privacy framework, finding that information a person knowingly exposes to the public falls outside Fourth Amendment protection, while what a person seeks to preserve as private may be constitutionally protected even in a publicly accessible place.15 Justice Harlan’s concurrence set forth the now-canonical test: a search occurs when the government intrudes on an expectation of privacy that society is prepared to recognize as reasonable.16 Katz thus marked a shift from a Fourth Amendment regime tethered to trespass doctrine toward one oriented around privacy expectations, though the framework could not fully anticipate technological progress: evolving technologies continually recast the very notion of what counts as exposure to the public. This doctrinal pivot remains critical because AI-powered surveillance, unlike physical entry, operates invisibly, continuously, and at a massive scale, raising precisely the kind of privacy concerns that Katz anticipated but could not fully resolve.
While Katz acknowledged that the government could impermissibly intrude on a single individual’s expectation of privacy through discrete collection of intangible information, like a telephone conversation from inside a phone booth, it did not grapple with population-wide, continuous monitoring or the mass aggregation of disparate data sources made possible by AI.17
The Supreme Court’s early encounters with location-tracking technologies in the 1980s were quick to illustrate the limits of the Katz framework. In United States v. Knotts, law enforcement agents placed a beeper inside a container of chloroform purchased by the defendant and then tracked the container as the defendant transported it along public highways.18 The Court held that no search had occurred, explaining that a person traveling on public roads has no reasonable expectation of privacy in their movements, since those movements could have been observed by anyone following behind.19 This reasoning, which effectively blessed the use of rudimentary tracking technology, has had enduring influence. Police departments and courts continue to invoke the principle that what is exposed to the public view is not protected.20 On its face, this logic suggests that AI surveillance relying on publicly available information would also fall outside Fourth Amendment protection. However, courts should recognize that the scale and continuity of observation enabled by AI surveillance create a comprehensive mosaic of personal life that no human observer could hope to replicate,21 such that a straightforward application of Knotts would undermine core Fourth Amendment protections. Therefore, this extension of the Katz framework, focused on individualized expectations of privacy and public exposure, fails to capture the new challenges of aggregated information and predictive capacities posed by AI.
Together, Katz and Knotts reflect the Supreme Court’s early attempts to grapple with the relationship between technology and privacy. They establish an essential doctrinal tension that continues to shape Fourth Amendment analysis even as new technologies complicate the framework: individuals generally have no reasonable expectation of privacy in information they expose to the public, yet Fourth Amendment principles were built around protecting discrete, intimate spaces and communications.22
This tension sharpened in Kyllo v. United States, where the Supreme Court confronted law enforcement’s use of a thermal imager to detect heat patterns inside a home.23 The Court held that when police use a device not in “general public use” to obtain information from the home without physical intrusion, such surveillance constitutes a search and is presumptively unreasonable without a warrant.24 Although Kyllo grounded its holding in trespass principles and the heightened privacy afforded to people within their homes, its logic resonates more broadly. Technologies like AI surveillance that extend government observation beyond the limits of ordinary perception enable the government to infer information about individuals that would be impossible to obtain through conventional observation, thus triggering Fourth Amendment concerns. This development set the stage for the Court’s encounters with GPS trackers, cell-site location data, and smartphones, which illustrate the judiciary’s growing recognition that new technologies demand novel applications of Fourth Amendment principles.25
Recent Supreme Court decisions illustrate a clear trend: modern surveillance raises constitutional stakes that qualitatively differ from traditional investigative techniques. In United States v. Jones, the Court held that installing a GPS device on a suspect’s vehicle constituted a search under the Fourth Amendment.26 The majority’s reasoning rested on the idea that Katz had supplemented, rather than displaced, the traditional trespass test, thereby characterizing the physical attachment of the device to the vehicle as sufficient to constitute a search.27 Jones also revealed the Court’s growing unease with technological monitoring. In a highly influential concurrence, Justice Sotomayor questioned whether the Katz framework should apply to prolonged GPS surveillance, emphasizing that continuous monitoring can invade expectations of privacy even without physical intrusion “by shaping the evolution of societal privacy expectations.”28 Subsequently, in Carpenter v. United States, the Court took up Justice Sotomayor’s argument and ruled that accessing months of cell-site location information (CSLI)—effectively tracking all of a person’s movements over a significant period of time—is itself a search that requires a warrant, even though a person’s movements in public are technically exposed to observation.29 The Court recognized that cell phones are a pervasive and indispensable part of daily life, and location data is generated automatically without any affirmative act by the user.30 The lack of an affirmative act by the individual undercuts the argument that using digital services constitutes voluntary exposure, which under Katz would have implied a forfeiture of privacy rights.31
In contrast to Jones, Carpenter squarely addressed the privacy implications of modern technology without relying on a trespass analogue, marking a doctrinal shift toward recognizing surveillance as constitutionally significant even in the absence of physical intrusion.32 The privacy risks contemplated in Carpenter closely parallel those produced by AI surveillance, since both CSLI and AI surveillance involve continuous aggregation and analysis of data across populations, most of which is generated passively. Driving this point home, in Riley v. California, the Supreme Court treated smartphones as fundamentally distinct from other personal effects, noting that the sheer quantity and sensitivity of data they contain demands heightened constitutional protection.33 Similarly, AI surveillance captures information that is highly personal and can result in detailed, predictive portraits of individuals, raising privacy risks analogous to those present in smartphone searches.
Taken together, these recent decisions signal that courts are increasingly sensitive to the scope, duration, and intimacy of modern surveillance. Whether through GPS, CSLI, or smartphones, the pattern is clear: technologies that enable continuous, detailed monitoring implicate privacy interests in ways that traditional methods of policing simply do not. Even so, these cases largely concern the retrospective collection of existing data. AI surveillance goes a step further by using historical and real-time data to generate predictions about future behavior. This enables a form of surveillance that current Fourth Amendment doctrine does not directly address. The Court’s evolving framework, however, provides a lens for assessing AI-driven policing. As surveillance tools become more pervasive and data-rich, constitutional analysis must account not only for what is being monitored, but also for the cumulative depth and duration of that monitoring and the nature of the predictive inferences it enables, especially when those predictions risk perpetuating racial bias by transforming historical policing disparities into algorithmic certainties.34
Recent decisions in the Circuit Courts of Appeals illustrate the dual concerns of privacy and discrimination raised by AI surveillance and provide models for addressing each. In Leaders of a Beautiful Struggle v. Baltimore Police Department, the Fourth Circuit struck down the Baltimore Police Department’s implementation of an aerial surveillance program.35 Although the program did not operate around the clock, its prolonged daytime tracking enabled police to deduce individuals’ movements and thereby invaded the reasonable “expectation of privacy that individuals have in the whole of their movements,” given its capacity to reveal personal habits and patterns.36 Like the aerial surveillance in Leaders of a Beautiful Struggle, AI surveillance is particularly pernicious because it can reveal intimate details about individuals by collecting and analyzing large patterns of behavior across populations.37 Moreover, in United States v. Curry, the Fourth Circuit recognized that predictive policing systems carry the risk of overreach and discrimination, with one concurrence questioning whether predictive policing could amount to a “high-tech version of racial profiling.”38 Another concurrence emphasized that any computer program or algorithm can only be as reliable as the data fed into it, highlighting how biased or incomplete datasets can produce discriminatory outcomes.39 Together, these cases demonstrate that some circuit courts have begun to recognize the deep constitutional concerns raised by AI surveillance. When surveillance technologies enable continuous, population-wide monitoring with the potential to reveal the whole of an individual’s movements or generate predictive insights into private life, use of the technology constitutes a Fourth Amendment search requiring a warrant.
As recognized by the Fourth Circuit in Curry, AI surveillance must also be treated as constitutionally distinct due to its capacity to replicate and entrench racial bias in law enforcement.40 Longstanding precedent makes it clear that even facially neutral policing practices cannot be used to disguise or perpetuate discriminatory enforcement.41 In Whren v. United States, the Supreme Court permitted traffic stops based on probable cause for a traffic violation but nonetheless took care to emphasize that selective enforcement based on race would violate the Equal Protection Clause.42 Likewise, in United States v. Brignoni-Ponce, the Court struck down vehicle stops based solely on occupants’ apparent Mexican ancestry, holding that reliance on race or ethnicity as the sole factor for suspicion is unconstitutional.43 AI surveillance threatens the constitutional commitment that policing not operate as a vehicle for systemic discrimination. Algorithmic systems trained on historical policing data, which reflects decades of racially disparate enforcement patterns, embed bias within their data-driven predictions,44 effectively automating the discriminatory judgments that Whren and Brignoni-Ponce expressly denounced.45 The risk that AI will legitimize discriminatory, suspicionless searches under the guise of technological objectivity provides yet another reason to require law enforcement officers to obtain a warrant before relying on AI surveillance.
Despite these concerns, proponents of AI-driven policing highlight potential operational benefits, such as crime reduction and faster emergency responses.46 Algorithms can process massive amounts of data in real time, far beyond human capacity, allowing law enforcement to allocate resources more effectively and respond quickly to emerging threats.47 As a result, AI could help reduce crime rates while saving time and taxpayer money.48 Although efficiency is appealing, constitutional rights cannot be subordinated to the aims of cost savings or expediency. Judicial history has already shown that efficiency in policing often comes at the expense of fairness, shortcutting procedural protections or disproportionately targeting vulnerable groups.49 Reliance on an AI system that rapidly processes biased data without the protection of a judicially granted warrant will only scale discrimination more efficiently, producing harm even as it attempts to prevent it.
AI surveillance represents a transformative frontier in policing, both offering potential benefits and raising profound constitutional and ethical questions. Courts cannot treat AI monitoring as equivalent to traditional law enforcement methods. Cases like Carpenter and Leaders of a Beautiful Struggle illustrate that the scale, duration, and intrusiveness of modern surveillance demand careful scrutiny. The law must account for cumulative privacy intrusions arising from continuous, population-wide, and opaque monitoring systems that risk reinforcing racial profiling under a technological guise. By balancing technological innovation with Fourth Amendment warrant requirements, lawmakers and courts can ensure that AI respects individuals’ reasonable expectation of privacy in the entirety of their movements and conduct, even when exposed to the public.
* Elysia Wu is a J.D. Candidate (2026) at New York University School of Law. This Contribution is a commentary on the use of artificial intelligence in policing as it relates to the Fourth Amendment. This Contribution distills one side of the argument, and the views expressed herein do not necessarily represent the author’s views.
1. Sean D. Theuson, Fourth Amendment Search – Fuzzy Shades of Gray: The New Bright-Line Rule in Determining When the Use of Technology Constitutes a Search – Kyllo v. United States, 121 S. Ct. 2038 (2001), 2 Wyo. L. Rev. 169, 176 (2002).
2. See, e.g., Carpenter v. United States, 585 U.S. 296, 305 (2018) (holding that warrantless acquisition of cell-site records violated the Fourth Amendment).
3. See, e.g., Leaders of a Beautiful Struggle v. Balt. Police Dep’t, 2 F.4th 330, 333 (4th Cir. 2021) (finding that an aerial surveillance program’s collection and use of data constituted a search under the Fourth Amendment).
4. Katz v. United States, 389 U.S. 347, 351 (1967).
5. David Ly, Predictive Policing: Myth Busting And What We Can Expect Of AI-Powered Law Enforcement, Forbes (Oct. 15, 2024), https://www.forbes.com/councils/forbestechcouncil/2024/10/15/predictive-policing-myth-busting-and-what-we-can-expect-of-ai-powered-law-enforcement/.
6. See Surveillance and Predictive Policing Through AI, Deloitte (n.d.), https://www.deloitte.com/global/en/Industries/government-public/perspectives/urban-future-with-a-purpose/surveillance-and-predictive-policing-through-ai.html (last visited Nov. 20, 2025).
7. See Elizabeth E. Joh, Police Technology Experiments, 125 Colum. L. Rev. F. 1, 5 (2025), https://columbialawreview.org/wp-content/uploads/2025/02/January-2025-Forum-Joh.pdf.
8. See Deloitte, supra note 6.
9. Id.
10. See National Association for the Advancement of Colored People, Artificial Intelligence in Predictive Policing Issue Brief, NAACP (n.d.), https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief (last visited Nov. 23, 2025).
11. See, e.g., City of Indianapolis v. Edmond, 531 U.S. 32, 37, 47 (2000) (holding that a vehicle checkpoint program violates the Fourth Amendment if officers lack individualized suspicion of wrongdoing).
12. Katz, 389 U.S. at 351.
13. Id. at 348.
14. Id. at 351, 353.
15. Id. at 351.
16. Katz, 389 U.S. at 361–62 (Harlan, J., concurring).
17. Id.
18. United States v. Knotts, 460 U.S. 276, 277 (1983).
19. Id. at 281–82.
20. See, e.g., State v. E.D.R., 959 So. 2d 1225, 1227 (Fla. Dist. Ct. App. 2007) (state argued contraband was not protected because officers could see it via an unenclosed, unscreened porch); United States v. Anderson, No. 20 CR 769, 2021 U.S. Dist. LEXIS 106278, at *3 (N.D. Ill. June 7, 2021) (officers observed defendant’s activities at the curtilage of an apartment building).
21. See Commonwealth v. McCarthy, 142 N.E.3d 1090, 1102–03 (Mass. 2020) (discussing “mosaic theory,” which recognizes that accumulation of data about a person’s public conduct may constitute a Fourth Amendment search).
22. See Katz, 389 U.S. at 361; Knotts, 460 U.S. at 281–82.
23. Kyllo v. United States, 533 U.S. 27, 29 (2001).
24. Id. at 34.
25. See generally United States v. Jones, 565 U.S. 400, 404 (2012) (holding that placing a GPS device on a vehicle and tracking its movements on public roads amounts to a Fourth Amendment search); Carpenter, 585 U.S. at 316 (holding that acquisition of cell-site location data constitutes a search under the Fourth Amendment).
26. Jones, 565 U.S. at 404, 409.
27. Id. at 409.
28. Jones, 565 U.S. at 415 (Sotomayor, J., concurring).
29. Carpenter, 585 U.S. at 316.
30. Id. at 315.
31. Id.
32. Id. at 315, 383.
33. Riley v. California, 573 U.S. 373, 395 (2014).
34. See P. Jeffrey Brantingham, The Logic of Data Bias and Its Impact on Place-Based Predictive Policing, 15 Ohio St. J. Crim. L. 473, 474 (2017–2018) (noting that predictive policing can exacerbate racial biases, which may lead to increasingly disproportionate arrest rates).
35. Leaders of a Beautiful Struggle, 2 F.4th at 333, 341.
36. Id. at 341, 343, 347.
37. See Deloitte, supra note 6.
38. United States v. Curry, 965 F.3d 313, 334 (4th Cir. 2020) (Gregory, C.J., concurring).
39. Curry, 965 F.3d at 345 (Thacker, J., concurring).
40. Id.
41. See generally Whren v. United States, 517 U.S. 806, 813 (1996).
42. Id.
43. United States v. Brignoni-Ponce, 422 U.S. 873, 885–87 (1975).
44. See Brantingham, supra note 34, at 474.
45. See NAACP, supra note 10.
46. See Deloitte, supra note 6.
47. See id.
48. See id.
49. See generally Russell L. Jones, Terry v. Ohio: Its Failure, Immoral Progeny, and Racial Profiling, 54 Idaho L. Rev. 511 (2018) (describing how interpretations of Terry v. Ohio have legitimized racial profiling).