by Emmett Tabor*

At times, administrative agencies encounter an overwhelming volume of public comments during the rulemaking process. The review of these comments, as mandated by the Administrative Procedure Act (APA), can delay the implementation of regulations and place a significant burden on resource-scarce agencies. The integration of Artificial Intelligence (AI) and Machine Learning (ML) tools into the comment review process offers a promising solution to expedite notice-and-comment rulemaking. At the same time, the use of these “black box” solutions may trigger legal challenges for potentially violating the procedural requirements of the APA. This Contribution explores three anticipated legal considerations for agency use of AI/ML tools in the review of public comment: (1) disclosure requirements under APA section 553(b); (2) obligations to “consider” public comments under APA section 553(c); and (3) the rule of prejudicial error under APA section 706. Despite these concerns, this Contribution argues that incorporation of AI/ML tools into the agency comment review process is compatible with the APA.


Public participation in agency notice-and-comment rulemaking is one of the fundamental democratic principles underpinning the American federal regulatory process.1

At the dawn of the digital age, the Bush administration began a major effort to integrate information technology into the commenting process to enhance accessibility and efficiency, establishing the eRulemaking Program through the E-Government Act of 2002.2 These efforts were further advanced under the Obama administration, which issued Executive Order 13563 in 2011 directing agencies to provide online access to the rulemaking docket.3 Today, the public can participate in federal rulemaking and influence proposed regulations through direct comments on agency portals and Regulations.gov.4 As a result of this increased public access to proposed regulations, however, the U.S. Government Accountability Office (GAO) has noted that a number of high-profile rulemakings have received extremely large numbers of comments.5 For instance, the Environmental Protection Agency (EPA) received more than 4 million comments on its 2014 rulemaking on greenhouse gas emissions.6 Likewise, the public comment period for the Federal Communications Commission (FCC)’s 2017 Restoring Internet Freedom rulemaking generated more than 22 million comments.7

Voluminous commentary can impose significant burdens on agencies engaged in the rulemaking process. In a 2019 staff report, the Senate Permanent Subcommittee on Investigations recognized that commenting systems at times have become “fora for profane, threatening, and abusive commentary.”8 The report further described how electronic regulatory dockets have been disrupted by commenters who submit excessive materials and data in order to overload the system and derail the comment period.9 Likewise, duplicative “form” comments and computer-generated “bot” comments drown out meaningful comments, making it difficult for agencies to review the docket for substantive information.10 The Senate Subcommittee’s report concluded that the “federal agencies that host these platforms have not yet found ways to cope with these abuses.”11 As such, these issues continue to risk delaying important government action, wasting scarce agency resources, and distorting the regulatory process.12

Given the scale of obstacles hindering effective comment management by administrative agencies, notice-and-comment rulemaking is ripe for efficiency gains through new technologies, such as Artificial Intelligence (AI) and Machine Learning (ML) tools.13 Already, prominent scholars have identified specific ways in which AI/ML tools can “enhance the quality of the rulemaking process” over manual human review.14 These proposals include the integration of AI/ML tools into existing agency filing systems to process and analyze public comment.15 AI/ML tools promise to advance an agency’s ability to summarize significant amounts of text, remove irrelevant comments, and identify key arguments raised in support of or opposition to a rulemaking.16
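To make the comment-triage tasks described above concrete, the following is a minimal, illustrative sketch of how duplicative “form” comments might be flagged so that reviewers see each form letter only once. It is not drawn from any agency system; the use of the scikit-learn library, the 0.9 similarity threshold, and the sample comments are assumptions for illustration only.

```python
# Illustrative sketch only: group near-duplicate "form" comments using
# TF-IDF cosine similarity. The library choice, threshold, and sample
# docket are assumptions, not features of any actual agency tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def flag_form_letters(comments: list[str], threshold: float = 0.9) -> dict[int, list[int]]:
    """Map each first-seen comment to later comments that are near-duplicates of it."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(comments)      # sparse term-weight matrix
    similarity = cosine_similarity(tfidf)           # pairwise similarity scores

    groups: dict[int, list[int]] = {}
    assigned: set[int] = set()
    for i in range(len(comments)):
        if i in assigned:
            continue
        duplicates = [
            j for j in range(i + 1, len(comments))
            if j not in assigned and similarity[i, j] >= threshold
        ]
        groups[i] = duplicates
        assigned.update(duplicates)
    return groups


if __name__ == "__main__":
    docket = [
        "I oppose this rule because it will raise compliance costs for small firms.",
        "I oppose this rule because it will raise compliance costs for small firms!",
        "The proposed emissions threshold ignores recent monitoring data.",
    ]
    print(flag_form_letters(docket))  # e.g. {0: [1], 2: []}
```

In this sketch, the first two hypothetical comments would be grouped as a single form letter, while the substantive third comment remains separately visible for review; a production tool would layer summarization and relevance screening on top of this kind of deduplication.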

Despite these potential efficiencies, prominent voices within the legal community have cautioned against increased reliance on AI tools, noting the potential for AI to “dehumanize” aspects of the law.17 Given these concerns, it is reasonable to anticipate that future efforts by government agencies to integrate AI/ML tools into the notice-and-comment rulemaking process will face legal challenges.

Specifically, objections may arise through the Administrative Procedure Act (APA) section 553, which requires notice, disclosure, and commenting opportunities for all notice-and-comment rulemaking.18 An agency’s failure to comply with these procedural requirements is cause for a reviewing court to hold unlawful and set aside an agency action as arbitrary and capricious under APA section 706.19 However, beyond the textual requirements of the APA, federal agencies have significant discretion to determine their own rulemaking procedures.20 Any judicial review of agency procedure is thus limited to evaluating the agency’s compliance with the specific statutory requirements of APA section 553(b), which mandates proper notice and an opportunity to participate in the rulemaking process, and section 553(c), which requires the agency to provide a concise general statement of the rule’s basis and purpose.21

The first potential issue raised by the integration of AI/ML tools is the inability of agencies to fully disclose the underlying algorithms, which are notoriously opaque and highly protected by tool developers, as well as any generated comment summaries.22 Subsection (b) of APA section 553 requires that published notice of a proposed rulemaking include “either the terms or substance of the proposed rule or a description of the subjects and issues involved.”23 Courts have clarified that a rulemaking violates subsection 553(b) only when lack of proper notice of the proposed rule fails to afford interested parties a reasonable opportunity to participate in the rulemaking process.24 Therefore, so long as interested parties are provided with the factual detail and rationale necessary to provide meaningful commentary, the requirements of subsection 553(b) are met.25 The core objective of section 553 is to let anyone comment, so far as practicable, on all the facts and ideas that the agency considers.26

A trio of D.C. Circuit cases forms the leading authority on section 553(b)’s disclosure requirements in the context of technology. In 1982’s Connecticut Light & Power v. NRC, the D.C. Circuit upheld the Nuclear Regulatory Commission’s fire protection program, but noted that the agency came perilously close to foreclosing any useful public participation due to its failure to disclose certain technical papers relied upon in the decision to implement the rule.27 In 1984’s NARUC v. FCC, the D.C. Circuit found that the FCC’s failure to initially disclose an internal staff economic study that served as a technical basis for the rule may have fatally tainted the agency’s decision-making process, but ultimately upheld the rule because parties were provided with the study during later reconsiderations.28 Most recently, in 2008’s American Radio Relay League v. FCC, the D.C. Circuit remanded an FCC rule regulating broadband over power lines because, in its disclosures, the agency had redacted portions of its scientific studies involving internal staff analysis.29 Taken together, these three cases mandate agency disclosure of materials concerning the “technical basis” of a proposed rulemaking. This reading flows logically from the APA’s intended purpose of facilitating democratic engagement with the technical materials that explain the agency’s decision to promulgate the rule in the first place.30 The APA’s notice-and-comment requirements are intended to safeguard against an agency concealing portions of the technical basis for a proposed rule and thereby foreclosing the ability of the public to meaningfully comment.31

But AI/ML summaries of public comments do not constitute a technical “study or data” related to the decision to implement the rule.32 In the context of internal agency functioning, these technologies are more akin to administrative tools, such as the Microsoft Office products already commonly in use, because they read and summarize language in the same way a human reader would. Moreover, the APA does not impose any requirements on disclosure of internal review materials after promulgation of the substantive rule itself. The agency’s internal process of comment review is temporally distinct from the “terms or substance” of the rule as the agency originally released it for comment.33 Even if an agency does make substantive updates to a rule following public comment, disclosure of its AI/ML tools would not impact the public’s ability to meaningfully engage in a second round of commentary, since all comments from the initial round are already publicly available and the agency would separately be required to respond to significant comments from the first round under section 553(c).34 As such, the APA cannot be read to impose disclosure requirements as to internal technical procedures for comment review. Finally, any court faced with interpreting section 553(b)’s requirements would likely consider the practical consequences of such a ruling. Requiring full disclosure of how federal agencies internally review and analyze information would cause impractical and unnecessary delays in rulemaking.35 Such a precedent would subject every future agency action to additional rounds of review and re-review as the public evaluates marginal summaries of material that is already publicly available.

The second potential issue raised by the integration of AI/ML tools is whether the APA itself requires human review of every comment.36 Subsection (c) of APA section 553 requires that agencies afford all interested parties an “opportunity to participate in the rule making through submission of written data, views, or arguments.”37 After “consideration of the relevant matters presented,” an agency must adopt “a concise general statement of their basis and purpose.”38 The Supreme Court has interpreted this to require an agency to consider and respond to significant comments received during the period for public comment.39

An agency’s failure to adequately identify and address substantial problems raised by commenters is grounds for a reviewing court to hold unlawful and set aside an agency action under APA section 706 as arbitrary and capricious.40 But the APA is generally interpreted as requiring agency response only to comments that are so material that, if true, they would require changing the agency’s rule.41 Failure to respond to comments is significant only when it demonstrates that the agency’s decision was not based on consideration of the relevant factors.42 As such, an agency may ignore comments that are purely speculative and fail to disclose any factual or policy basis.43

In light of these legal standards, human review of every comment is not required, and the use of AI/ML tools to synthesize material public comments and filter out those lacking any factual or policy basis is appropriate. To the extent that an agency may “consider” material comments by having a summer intern review and summarize them, equal “consideration” will likely occur when an AI tool provides an equivalent, if not more thorough, summary report. Moreover, courts have specifically emphasized that agencies have no obligation to weigh the number of comments for or against a rulemaking, only the substance of the comments.44 Thus, as long as agencies accurately review material commentary, there is no limitation on how this review is accomplished, whether through AI/ML tools or human review.

A third consideration raised by the integration of AI/ML tools is whether the use of these tools would rise to the level of “prejudicial error.”45 In the event that these tools do miss a material comment at some point, it is unlikely that the miss would rise to the level of harm necessary to establish a prejudicial procedural failure. Specific challenges to the use of AI/ML tools in agency comment review are evaluated under APA section 706, which instructs courts that “due account shall be taken of the rule of prejudicial error.”46 As such, to vacate an agency action for a procedural deficiency, a party must demonstrate that they suffered “prejudice” from the agency’s procedural failure.47 Thus, agency rules cannot be vacated unless the AI/ML tool resulted in more than “harmless error.”48 To do so, any challenging party would likely need to demonstrate, empirically, that the administrative agency’s specific AI/ML tool had such a high false-negative rate that its use was “prejudicial” to the commenting parties’ ability to meaningfully participate in the rulemaking proceedings. In other words, the test of procedural soundness is whether the tools are so flawed that significant criticisms of a proposed rule were overlooked by the agency during the rulemaking process.49 The likelihood of an AI/ML tool missing or misinterpreting a material comment is a quantifiable risk that can be identified through empirical testing, and agencies confident in the accuracy of their tools will be well positioned to defend against challenges in this regard.
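As one hypothetical illustration of how an agency might quantify that risk, the sketch below estimates a tool’s false-negative rate by comparing its “material comment” flags against a human-coded audit sample of the docket. The sample labels, the notion of a boolean “material” flag, and the audit design are assumptions introduced here for illustration, not features of any existing agency tool or validation protocol.

```python
# Illustrative sketch only: estimate the false-negative rate of a hypothetical
# comment-triage tool against a human-labeled audit sample. The labels and
# audit design are assumptions, not any agency's actual validation protocol.
def false_negative_rate(human_labels: list[bool], tool_flags: list[bool]) -> float:
    """Share of comments humans marked material that the tool failed to flag."""
    if len(human_labels) != len(tool_flags):
        raise ValueError("labels and flags must cover the same sample")
    missed = sum(1 for human, tool in zip(human_labels, tool_flags) if human and not tool)
    material = sum(human_labels)
    if material == 0:
        return 0.0
    return missed / material


if __name__ == "__main__":
    # Hypothetical audit sample: True means the comment was judged material.
    human = [True, True, False, True, False, False, True, False]
    tool = [True, False, False, True, False, True, True, False]
    print(f"false-negative rate: {false_negative_rate(human, tool):.2%}")  # 25.00%
```

An agency that periodically runs this kind of audit and documents a low miss rate would have concrete record evidence that its comment-review process did not overlook significant criticisms, which is precisely the showing the prejudicial-error inquiry demands.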

Further study is needed to definitively determine the efficacy and accuracy of specific AI/ML tools but, from a legal perspective, their use is compatible with the procedural requirements of the APA. Moreover, as a policy matter, the overwhelming benefits of AI/ML tools weigh strongly in favor of presumptive use to assist agency review of public comment. While reliance on AI in any context certainly poses risks, the specific application of these tools to the review and summarization of text comments through data crawling and natural language processing is sufficiently cabined to avoid major concerns like hallucinations.50 Computer-assisted review is already commonly used in private practice to accelerate discovery and reduce legal costs, with limited drawbacks.51

Integration of AI/ML tools also aligns with President Biden’s October 30, 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which dedicated an entire section to advancing the federal government’s use of AI.52 Subsequent guidance from the Office of Management and Budget, issued on March 28, 2024, likewise left the door open for the use of AI/ML tools to streamline the regulatory rulemaking process.53 Agency integration of these tools presents a significant opportunity to harness technological innovation for the promotion of good governance.


* Emmett Tabor is a J.D. Candidate (2025) at New York University School of Law. This Contribution is a commentary on the problem at the 2024 National Telecommunications and Technology Moot Court Tournament, hosted by Catholic University Columbus School of Law. This Contribution distills one side of the argument, and the views expressed herein do not necessarily represent those of the author.

1. See generally Donald J. Kochan, The Commenting Power: Agency Accountability Through Public Participation, 70 Okla. L. Rev. 601 (2018) (describing the public participation requirement as a two-way obligation to engage in dialogue as a critical check on agency power).

2. See Steven Balla & Benjamin Daniels, Information Technology and Public Commenting on Agency Regulations, 1 Regul. & Governance 46, 49 (2007), https://doi.org/10.1111/j.1748-5991.2007.00005.x (“[A]ccording to provisions in the E-Government Act of 2002, a government-wide docket system, housed at http://www.regulations.gov, will one day operate as the definitive clearinghouse for information about the activities of all federal rulemaking entities.”); About the eRulemaking Initiative, Regulations.gov (last visited Jul. 13, 2025), https://www.regulations.gov/about; 44 U.S.C. §§ 3601-16 (2002).

3. See Exec. Order No. 13,563, 76 Fed. Reg. 3821, 3822 (Jan. 18, 2011) (directing agencies to “provide, for both proposed and final rules, timely online access to the rulemaking docket on regulations.gov, including relevant scientific and technical findings, in an open format that can be easily searched and downloaded”).

4. See Steven J. Balla et al., Responding to Mass, Computer-Generated, and Malattributed Comments, 74 Admin. L. Rev. 95, 103–04 (2022), https://ssrn.com/abstract=3989307 (“[M]any but not all independent agencies also use Regulations.gov for their rulemakings; those that do not prominently include the FCC and the Securities and Exchange Commission.”).

5. See U.S. Gov’t Accountability Off., GAO-21-103181, Selected Agencies Should Fully Describe Public Comment Data and their Limitations 2 (2021) (“In recent years, some high-profile rulemakings received extremely large numbers of comments.”); see also Balla et al., supra note 4, at 104–08.

6. See Carbon Pollution Emission Guidelines for Existing Stationary Sources: Electric Utility Generating Units, 80 Fed. Reg. 64662, 64663 (Oct. 23, 2015) (to be codified at 40 C.F.R. pt. 60) (“This final rule is the result of unprecedented outreach and engagement with states, tribes, utilities, and other stakeholders, with stakeholders providing more than 4.3 million comments on the proposed rule.”).

7. See Restoring Internet Freedom, 83 Fed. Reg. 7852, 7913 (Feb. 22, 2018) (to be codified at 47 C.F.R. pts. 1, 8, 20) (“The Internet Freedom NPRM sought comment on consumer harm in a variety of contexts and, in response, received over 22 million comments discussing consumers’ view of the Title II Order, including any harm that may or may not have occurred under its rules.”).

8. Staff of S. Permanent Subcomm. on Investigations, Comm. on Homeland Sec. & Gov’t Affs., 116th Cong., Rep. on Abuses of the Federal Notice-and-Comment Rulemaking Process 1 (2019), https://www.manufacturedhomepronews.com/wp-content/uploads/2023/04/2019-10-24Senate-PSI-Staff-Report-Abuses-of-the-Federal-Notice-and-Comment-Rulemaking-Process.pdf [hereinafter “2019 Senate Staff Report”].

9. See id. at 25 (noting that comments include “massive amounts of data irrelevant to the topic at hand”).

10. See Balla et al., supra note 4, at 97 (noting that only 6% of the comments submitted for the FCC’s Restoring Internet Freedom rulemaking were unique). See generally Fake It Till They Make It: How Bad Actors Use Astroturfing to Manipulate Regulators, Disenfranchise Consumers, and Subvert the Rulemaking Process: Hearing Before the H. Fin. Servs. Comm., 116th Cong. (2020) (statements of Bartlett Naylor, Beth Noveck, Paulina Gonzalez-Brito, Seto Bagdoyan, and Steven Balla), https://www.congress.gov/event/116th-congress/house-event/110461 (describing mass comment campaigns).

11. 2019 Senate Staff Report, supra note 8, at 1.

12. See Catherine M. Sharkey, Algorithmic Retrospective Review of Agency Rules 22 n.137 (May 3, 2023) (report to the Admin. Conf. of the U.S.) (finding that responding to a high volume of comments can cost agencies $200,000 per rule in labor and technology resources).

13. See Stephen M. Johnson, Rulemaking 3.0: Incorporating AI and ChatGPT Into Notice and Comment Rulemaking, 88 Mo. L. Rev. (forthcoming Fall 2024) (manuscript at 56–57) (“Technological innovations since the turn of the century have created opportunities to increase public participation in notice and comment rulemaking, increase the efficiency of the process, and increase the quality of the rules adopted by agencies . . . Artificial intelligence and ChatGPT are the latest tech trends to create opportunities to transform notice and comment rulemaking.”).

14. See David F. Engstrom et al., Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies 60 (February 2020) (report to the Admin. Conf. of the U.S.) (“AI/ML tools can help to identify duplicates and form letters, summarize overall comment sentiment, and identify relevant comments that could save significant resources and enhance the quality of the rulemaking process.”).

15. Id.

16. See id. (describing how AI/ML tools can help to “streamline agency processing and analysis of public comments” through their capacity “to identify duplicates and form letters, summarize overall comment sentiment, and identify relevant comments that could save significant resources and enhance the quality of the rulemaking process”).

17. See John Roberts, 2023 Year-End Report on the Federal Judiciary 5 (2023), https://www.supremecourt.gov/publicinfo/year-end/2023year-endreport.pdf (emphasizing that AI “risks invading privacy interests and dehumanizing the law”); ABA Task Force on Law and Artificial Intelligence, American Bar Association (last visited Jul. 13, 2024), https://www.americanbar.org/groups/centers_commissions/center-for-innovation/artificial-intelligence/ai-challenges-generative-ai/ (“It is important for the legal profession to be aware of the safety concerns and other risks associated with generative AI, including privacy and cybersecurity risks, the generation of inaccurate content, and copyright infringement and other intellectual property issues.”).

18. See generally 5 U.S.C. § 553.

19. 5 U.S.C. § 706; Citizens to Pres. Overton Park, Inc. v. Volpe, 401 U.S. 402, 413–14 (1971).

20. See Vermont Yankee Nuclear Power Corp. v. NRDC, 435 U.S. 519, 548 (1978) (holding that a court may not overturn a rulemaking proceeding on the basis of an agency’s procedural devices so long as the agency employed at least the statutory minima provided in the APA).

21. 5 U.S.C. § 553; Perez v. Mortg. Bankers Ass’n, 575 U.S. 92, 96 (2015).

22. See David Rubenstein, Acquiring Ethical AI, 73 Fla. L. Rev. 747, 778–80 (2021) (discussing transparency concerns in government reliance on AI algorithms, which are often compounded by nondisclosure agreements that prevent publicizing vendor trade secrets).

23. 5 U.S.C. § 553(b)(3).

24. WJG Tel. Co. v. FCC, 675 F.2d 386, 389 (D.C. Cir. 1982).

25. See Fla. Power & Light Co. v. Nuclear Regul. Comm’n, 846 F.2d 765, 771 (D.C. Cir. 1988) (“[N]otice must not only give adequate time for comments, but also must provide sufficient factual detail and rationale for the rule to permit interested parties to comment meaningfully.”).

26. See Am. Ass’n of Meat Processors v. Bergland, 460 F. Supp. 279, 282 (D.D.C. 1978) (quoting scholar and APA drafter Kenneth Culp Davis that “a main objective of § 553 procedure should be, as far as practicable, to let anyone comment on all the facts and all ideas that the agency considers”).

27. See Conn. Light & Power Co. v. Nuclear Regul. Comm’n (“NRC”), 673 F.2d 525, 530–31 (D.C. Cir. 1982) (“To allow an agency to play hunt the peanut with technical information, hiding or disguising the information that it employs, is to condone a practice in which the agency treats what should be a genuine interchange as mere bureaucratic sport. An agency commits serious procedural error when it fails to reveal portions of the technical basis for a proposed rule in time to allow for meaningful commentary.”).

28. Nat’l Ass’n of Regul. Util. Comm’rs (“NARUC”) v. FCC, 737 F.2d 1095, 1121 (D.C. Cir. 1984).

29. See Am. Radio Relay League, Inc. v. FCC, 524 F.3d 227, 237 (D.C. Cir. 2008) (finding there is no APA precedent allowing an agency to “cherry-pick” what it discloses from an internal study which it has chosen to rely on in part).

30. See Am. Radio Relay League, Inc., 524 F.3d at 236 (“Construing section 553 of the APA, the court explained long ago that ‘[i]n order to allow for useful criticism, it is especially important for the agency to identify and make available technical studies and data that it has employed in reaching the decisions to propose particular rules.’” (quoting Conn. Light & Power Co., 673 F.2d at 530)).

31. See id. (“Enforcing the APA’s notice and comment requirements ensures that an agency does not ‘fail[] to reveal portions of the technical basis for a proposed rule in time to allow for meaningful commentary . . . .’” (quoting Conn. Light & Power Co., 673 F.2d at 531)).

32. See Conn. Light & Power Co., 673 F.2d at 530 (“In order to allow for useful criticism, it is especially important for the agency to identify and make available technical studies and data that it has employed in reaching the decisions to propose particular rules.”).

33. 5 U.S.C. § 553(b)(3).

34. 5 U.S.C. § 553(c).

35. See Am. Trucking Ass’ns, Inc. v. EPA, 283 F.3d 355, 372 (D.C. Cir. 2002) (recognizing that “requiring agencies to obtain and publicize the data underlying all studies on which they rely would be impractical and unnecessary” (internal quotations omitted)).

36. See Eli Nachmany, Artificial Intelligence, Modernizing Regulatory Review, and the Duty to Respond to Public Comments, Yale J. on Reg., Notice & Comment Blog (May 30, 2023), https://www.yalejreg.com/nc/artificial-intelligence-modernizing-regulatory-review-and-the-duty-to-respond-to-public-comments-by-eli-nachmany/ (describing that, if faced with hypothetical AI-generated summaries of public comments, “[a] reviewing court could not be certain that the agency gave adequate consideration to the comments, even in spite of a record appearing to say that the agency did”).

37. 5 U.S.C. § 553(c).

38. Id.

39. See Perez, 575 U.S. at 96 (“An agency must consider and respond to significant comments received during the period for public comment.”).

40. See, e.g., Business Roundtable v. SEC, 647 F.3d 1144, 1149 (D.C. Cir. 2011) (finding that the SEC acted arbitrarily by failing to respond to substantial problems raised by commenters); see 5 U.S.C. § 706.

41. See Am. Mining Cong. v. EPA, 907 F.2d 1179, 1187 (D.C. Cir. 1990) (“[I]n assessing the reasoned quality of the agency’s decisions, we are mindful that the notice-and-comment provision of the APA has never been interpreted to require [an] agency to respond to every comment, or to analyse [sic] every issue or alternative raised by comments, no matter how insubstantial. Rather, the agency need respond only to those comments which, if true, . . . would require a change in an agency’s proposed rule.” (internal citations omitted)); Indus. Union Dep’t, AFL-CIO v. Hodgson, 499 F.2d 467, 475 (D.C. Cir. 1974) (“What we are entitled to at all events is a careful identification . . . of the reasons why [an agency] chooses to follow one course rather than another.”).

42. See Thompson v. Clark, 741 F.2d 401, 409 (D.C. Cir. 1984) (“The failure to respond to comments is significant only insofar as it demonstrates that the agency’s decision was not ‘based on a consideration of the relevant factors.’” (internal citations omitted)). Other circuits use similar tests, such as whether an agency’s decision reflects a “reasoned path” discerned from the acts and considerations before it to the decision it reached. See, e.g., Simms v. Nat’l Highway Traffic Safety Admin., 45 F.3d 999, 1004–05 (6th Cir. 1995) (“[W]e will uphold the Commission’s decision if, but only if, we can discern a reasoned path from the facts and considerations before the Commission to the decision it reached.” (internal citations omitted)).

43. See Home Box Off., Inc. v. FCC, 567 F.2d 9, 35 n.58 (D.C. Cir. 1977) (per curiam) (“[C]omments which themselves are purely speculative and do not disclose the factual or policy basis on which they rest require no response. There must be some basis for thinking a position taken in opposition to the agency is true.”); see also MCI WorldCom, Inc. v. FCC, 209 F.3d 760, 765 (D.C. Cir. 2000) (“An agency is not obliged to respond to every comment, only those that can be thought to challenge a fundamental premise.”).

44. U.S. Cellular Corp. v. FCC, 254 F.3d 78, 87 (D.C. Cir. 2001) (“[T]he Commission has no obligation to take the approach advocated by the largest number of commenters . . . [t]he Commission’s only responsibilities are to respond to comments and to choose a reasonable approach backed up by record evidence.” (internal citation omitted)).

45. See generally Craig Smith, Note, Taking “Due Account” of the APA’s Prejudicial Error Rule, 96 Va. L. Rev. 1727 (2010) (describing how courts have applied the prejudicial error doctrine).

46. 5 U.S.C. § 706.

47. See Gerber v. Norton, 294 F.3d 173, 182 (D.C. Cir. 2002) (explaining that the rule of prejudicial error requires that plaintiff must “indicate with reasonable specificity what portions of the documents it objects to and how it might have responded if given the opportunity” (internal citation omitted)).

48. See Shinseki v. Sanders, 556 U.S. 396, 406 (2009) (noting that section 706’s rule of prejudicial error follows the harmless-error rule ordinarily applied in civil case law).

49. See NARUC, 737 F.2d at 1121 (finding any error due to delayed disclosures of technical materials was “rendered harmless” because the agency’s final decision was “framed with adversarial comment in full view”).

50. See When AI Gets It Wrong: Addressing AI Hallucinations and Bias, MIT Mgmt. STS Teaching & Learning Tech. (last visited Jul. 14, 2024), https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/ (describing hallucinations as incorrect or misleading results generated by AI models due to insufficient training data, incorrect assumptions, or biases in the data); Ross Gruetzemacher, The Power of Natural Language Processing, Harv. Bus. Rev., https://hbr.org/2022/04/the-power-of-natural-language-processing (Apr. 19, 2022) (explaining recent exponential growth in AI models’ capacity for natural language processing).

51. See Moore v. Publicis Groupe, 287 F.R.D. 182, 193 (S.D.N.Y. 2012) (“[C]omputer-assisted review is an available tool and should be seriously considered for use in large-data-volume cases where it may save the producing party (or both parties) significant amounts of legal fees in document review.”).

52. See Exec. Order No. 14,110, 88 Fed. Reg. 75191, 75193 (Nov. 1, 2023) (stating a policy goal “to modernize Federal Government information technology infrastructure, remove bureaucratic obstacles, and ensure that safe and rights-respecting AI is adopted, deployed, and used”).

53. See generally Off. of Mgmt. & Budget, Exec. Off. of the President, OMB Memo M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (2024), https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf.