by Lindsey Schwartz*

The drastic expansion in collection of consumer personal data and concerning trends in children’s health and safety, correlated with their increased time spent online, further revealed the inadequacy of current regulatory protections and motivated a recent wave of children’s privacy and online safety legislation. As a result, companies are challenging the regulations’ constitutionality, primarily in the form of alleged First Amendment violations. This Contribution argues that, despite a recent court decision finding otherwise, the California Age-Appropriate Design Code regulates children’s privacy largely within the bounds of the First Amendment, while concurrently identifying the provisions of the Code that may be constitutionally uncertain. Finally, this Contribution proposes alternative options for age-appropriate design codes that would more easily survive First Amendment scrutiny while still protecting children’s privacy interests.


Technological advancements combined with widespread use of the internet and social media allow companies to collect, analyze, and profit from the use of consumer data at a volume previously impossible to contemplate, leading to a growing public concern about the erosion of privacy.1 Responding to federal inaction in this area, California enacted the California Consumer Privacy Act of 2018 (“CCPA”), the first law of its kind in the United States, which comprehensively regulates the collection and use of consumer personal information.2 Numerous states have since considered or enacted similarly comprehensive data privacy regulation.3

A similar pattern of legislation is emerging focused more narrowly on heightened regulation of children’s4 privacy and safety on the internet and social media; the California Age-Appropriate Design Code Act (“CAADCA”) was the first successfully enacted law in a wave of such legislation proposed in the United States.5 Growing public awareness and concern about the decline of youth mental health and safety connected with increased use of the internet and social media is driving this legislation regulating online platforms.6

While each state law related to children’s privacy and safety online has some unique qualities, they generally fall into one of two categories.7 The first category includes regulations that place limitations on minors’ access to social media—and sometimes additional online platforms (e.g., online games)—often including parental access and consent conditions and generally requiring users to submit to age-verification.8 The second category, and the primary subject of this Contribution, consists of age-appropriate design regulations like CAADCA, which require online services likely to be accessed by minors to implement high privacy-by-design for their minor users and to substantially limit collection and uses of minors’ personal information that are likely to harm them.9

As was the case with previous legislative efforts focused on protecting children, regulated entities are successfully challenging both types of laws under the First Amendment.10 Despite these initial legal challenges, legislators, mental health and family advocacy organizations, and parents remain eager for solutions that protect children’s privacy and well-being online.

******

The current legislative landscape provides limited and inadequate protections for children’s privacy and well-being online; public recognition of this problem is motivating bipartisan support for stronger regulation. Congress enacted the Children’s Online Privacy Protection Act (“COPPA”) in 1998 because it recognized the special risks to children’s privacy posed by widespread use of the internet, and it wanted to give parents more control over how online platforms used their children’s personal information.11 COPPA regulates the collection and use of personal information of children under the age of thirteen, but its protections apply in relatively narrow circumstances and have not kept pace with current practices.12 First, COPPA does not provide any protection for teens, ages thirteen to seventeen, and only applies to websites and services “directed at children” or services that have “actual knowledge” that they are collecting personal information from a child.13 These limitations on application encourage many online platforms to try to evade regulation by directing their websites and services to a general audience and avoiding “actual knowledge” of users under the age of thirteen.14 Second, the primary gatekeeper of regulated entities’ collection, use, and disclosure of a child’s personal information is “verifiable parental consent,” with notice of the types of personal information collected from the child; the opportunity to refuse further use, maintenance, or collection of that information; and reasonable means to obtain any personal information collected.15 The business then has a right to terminate service if a parent refuses to allow collection and use of the child’s information.16 Scholars largely agree that this method of privacy regulation, termed “notice and consent,” is not particularly effective due to the convoluted nature and extensive length of most privacy policies that provide “notice” and the fact that users either have to consent or not use the service at all.17 COPPA’s limitations provide little autonomy for users to decide what information companies collect and how they use that information.

Today, far more children of all ages regularly use online platforms and services, and spend much more of their time using them, than when COPPA was enacted in 1998.18 The online environment has also drastically changed and continues to evolve at a rapid pace.19 As a result, many of the concerns that originally motivated child protective legislation—online predation, financial extortion, manipulative marketing, cyberbullying—continue to expose children to harm.20 Experts have also identified a substantial increase in negative mental health outcomes among youth since 2009, and substantial research suggests that excessive time online and on social media is a significant contributing factor to many children’s mental health problems.21 Public bipartisan support for increased regulation of online platforms with child users exploded after a Meta (formerly Facebook) whistleblower revealed internal company research indicating that Meta knew it was harming children and yet intended to continue targeting even younger audiences.22

*****

Attempting to remedy these substantial harms, California enacted the California Age-Appropriate Design Code Act (“CAADCA”), which applies to businesses that provide an “online service, product, or feature likely to be accessed by children.”23 Unlike COPPA, CAADCA extends protections to children under the age of eighteen and expands the scope of regulated entities to all covered services “likely to be accessed by children” rather than only those directed at children.24 Recently, in Netchoice v. Bonta, the district court granted a preliminary injunction against CAADCA, in its entirety, because the court found that the challenged provisions likely violate the First Amendment.25 However, the court’s reasoning misapplies the Supreme Court’s commercial speech scrutiny standard, effectively holding CAADCA to a standard closer to strict scrutiny.26

CAADCA’s primary substantive provisions include a set of affirmative requirements and a set of prohibitions for covered businesses. These provisions can be grouped into three primary components: Data Protection Impact Assessments (“DPIA”), data privacy and protection mandates, and an age-estimation requirement—unless the business applies the data privacy and protection mandates to all users.27 The DPIA requires that businesses describe how their service “uses children’s personal information, and the risks of material detriment to children that arise from [their] data management practices,” create a plan to mitigate or eliminate those risks, and provide the DPIA to the Attorney General on written request.28 The data privacy and protection mandates include requirements to provide high default privacy settings to child users and limit the collection, use, storage, and sharing of children’s personal information.29 The age-estimation requirement provides that a business must either “[e]stimate the age of child users” within a level of certainty reasonable to the risks arising from the business’s data management practices or provide heightened privacy protections to all users.30
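
To make the structure of these obligations more concrete, the following sketch models the basic choice the statute gives a covered business: either estimate a user’s age with a degree of certainty proportionate to the risks arising from the business’s data management practices and apply heightened protections to likely minors, or apply those protections to every user. This is only a conceptual illustration; CAADCA does not prescribe any implementation, and every name, threshold, and setting below is hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical privacy configurations; CAADCA does not enumerate specific settings.
HIGH_PRIVACY_DEFAULTS = {
    "profiling_by_default": False,     # no profiling of child users by default
    "precise_geolocation": False,      # no precise geolocation collection by default
    "data_sharing": "none",            # no sharing of personal information by default
    "retention": "minimum_necessary",  # retain data only as long as needed
}

STANDARD_DEFAULTS = {
    "profiling_by_default": True,
    "precise_geolocation": True,
    "data_sharing": "partners",
    "retention": "indefinite",
}


@dataclass
class AgeEstimate:
    age: Optional[int]   # None if the business does not estimate age at all
    confidence: float    # 0.0-1.0 certainty of the estimate


def default_settings(estimate: AgeEstimate, risk_level: float) -> dict:
    """Choose default privacy settings for a new account.

    risk_level (0.0-1.0) stands in for the risks arising from the business's
    data management practices; the certainty required of the age estimate
    scales with it. A business that does not estimate age must apply the
    heightened protections to all users.
    """
    if estimate.age is None:
        return HIGH_PRIVACY_DEFAULTS  # no estimation: protect everyone

    required_confidence = 0.5 + 0.5 * risk_level  # hypothetical proportionality rule
    if estimate.age < 18 or estimate.confidence < required_confidence:
        return HIGH_PRIVACY_DEFAULTS  # likely or uncertain minor: heightened defaults
    return STANDARD_DEFAULTS
```

The point of the sketch is only that the statute ties the rigor of age estimation to the riskiness of the business’s data practices and always leaves open the option of treating every user as a child.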

While some scholars emphasize that most data privacy laws permissibly regulate conduct, rather than protected speech, and should be subject to general rational basis review,31 the language of the Supreme Court’s decision in Sorrell v. IMS Health made the status of data privacy regulation under the First Amendment somewhat uncertain.32 Sorrell concerned a Vermont law prohibiting the collection and use of prescriber-identifying information for marketing purposes, but allowing the same information collection for other purposes.33 The Court determined that the law was subject to heightened scrutiny, even if the prescriber information was a commodity, rather than speech, because the law was a form of content- and speaker-based discrimination.34 Because there was no compelling state interest in singling out marketers as disfavored speakers or marketing as disfavored speech, the Court found that the law did not survive heightened scrutiny.35 Sorrell focuses on the viewpoint-based discrimination inherent in singling out marketers for disfavored regulation because of the content of their message, while acknowledging the important privacy interests that viewpoint-neutral regulation could advance.36 Since Sorrell, courts have generally applied lower levels of scrutiny than the Court applied in that case, while accepting that data privacy laws implicate expression.37

Similar to what other courts have done since Sorrell, the court in Netchoice v. Bonta determined that CAADCA must satisfy the Court’s test for regulation of commercial speech.38 Assuming the regulated speech is not misleading or unlawful, that test, articulated in Central Hudson, requires that the state’s interest be “substantial” and that the “regulation directly advances the state interest” by means that are “not more extensive than is necessary.”39 The court in Netchoice v. Bonta accepted the State’s substantial interest in protecting children’s health and well-being because there was sufficient evidence presented that “children are currently harmed by lax data and privacy protections online” and “the Supreme Court has repeatedly recognized a compelling interest in ‘protecting the physical and psychological well-being of minors.’”40 Despite this substantial interest, the court found that none of the challenged provisions satisfied the means-ends fit required to survive commercial speech scrutiny.41

However, the vast majority of the CAADCA’s provisions unquestionably satisfy a proper application of that standard. Central Hudson’s means-ends fit analysis is not a “least restrictive means standard;”42 it requires only that the means be proportional in scope to the interest advanced.43 The Netchoice v. Bonta court instead repeatedly rejected provisions because they would not eliminate the stated harm or because the regulation might produce some harmful outcomes rather than solely beneficial ones. For example, when analyzing the DPIA reporting requirements,44 the court rejected the means-ends fit with the State’s purported interest in requiring businesses to proactively assess the risk of harm to children posed by their product design and data management practices, rather than continuing current practices that might reactively “remov[e] problematic features only after harm is discovered.”45 According to the court, the State did not “demonstrate that the DPIA provisions in fact address the identified harm” of eliminating harmful product design; rather, the DPIA only requires a plan to address harmful data management practices.46 However, the court failed to consider that the very product design the State identified as harmful was the use of children’s data to target them with harmful content and algorithms deliberately designed to make it more challenging for them to disengage from the product;47 thus a plan targeting data management inherently addresses harmful product design. The court also rejected an age-appropriate policy language requirement,48 claiming that the evidence did not show that providing privacy policies written at the college level constituted a harm to children and that, even assuming it did, the policy provision did not advance a solution to that harm.49 However, this contention ignores clear evidence that most adults do not understand these typically convoluted and lengthy privacy policies, which hinders their ability to make informed choices about their privacy.50 The court similarly rejected a prohibition on profiling child users by default51 because Netchoice asserted that some profiling, such as profiling used to target resources about sexual orientation and gender identity to LGBT youth, is beneficial.52 In a particularly egregious example of the court’s sweeping analysis, it rejected a prohibition on knowingly harmful uses of children’s personal information53 by comparing it to the Third Circuit’s decision in ACLU v. Mukasey, which applied strict scrutiny to a content-restrictive law, and finding that the same concerns were present in this case.54 In each of these instances, the court used the language of commercial speech scrutiny but in effect applied a strict scrutiny standard by expecting the State to use the best methods—in the court’s view—to eliminate the stated harms without imposing any burdens on beneficial uses.55 These provisions do not prevent children from accessing anything that they search for or request themselves; they only limit the ways in which regulated businesses can target children using their personal information.

Two provisions of the CAADCA may, however, be less clearly constitutional under the First Amendment: the content-related provisions of the DPIA56 and the age-estimation requirement.57 First, the CAADCA’s content-focused provisions within the DPIA reporting requirements are especially susceptible to a compelled speech argument. Although the Court has determined that businesses can be required to disclose “purely factual and uncontroversial information about the terms under which . . . services will be available,”58 it has also found that the government cannot otherwise compel companies to speak about controversial political matters.59 Because these provisions require companies to identify how their product design may expose children to harmful content, they first require companies to determine what specific content on their platforms is or could be harmful to children. While there may be certain types of content that most would agree are harmful to children—for example, content that promotes self-harm or eating disorders, as the State references60—recent legislation limiting educators’ speech on issues of gender identity, sexual orientation, and race, and the increase in book bans targeting similar topics, demonstrate that what content is harmful to children is a matter of political controversy.61 These ongoing debates make it less likely that a court would treat this type of DPIA requirement as the kind of purely factual, uncontroversial disclosure requirement typically permitted in commercial regulation.62

Second, the court in Netchoice v. Bonta argued that the age-estimation requirement in the CAADCA runs counter to privacy interests and presents the same First Amendment challenges as the age-verification schemes the Court previously rejected.63 Courts have previously rejected age-verification schemes because they substantially limited adults’ access to speech or chilled speech by requiring users to forgo anonymity.64 However, age-estimation may be materially different from age-verification, and a narrow reading could allow the provision to survive. For example, age-estimation does not necessarily require identification of the user because it estimates an age range using algorithms or assessments rather than a hard identifier, such as government identification.65 Additionally, the strict limitation that age-estimation data be used for no other purpose, together with the requirement to delete records of that data once the purpose is fulfilled, further reduces privacy concerns.66 Still, First Amendment jurisprudence has a substantial history of rejecting forms of age-verification, such that any attempt to justify even less onerous requirements will likely need significant evidence that users can retain their anonymity.

******

Even though the court in Netchoice v. Bonta applied a higher-than-necessary standard, cautious legislators worried about First Amendment challenges should treat the court’s heavy focus on the CAADCA’s regulation of harmful content as instructive when developing more narrowly tailored legislation to improve protection for children’s privacy. Legislators have the option of taking a substantially more cautious approach in response to recent First Amendment challenges or including some features that might be challenged but could still be defensible.

Cautious legislation could replace age-estimation with an actual-knowledge or willful-blindness standard more similar to current COPPA rules. Connecticut recently passed an act that takes this cautious approach, applying its protections when a business has actual knowledge that a user is a minor or willfully disregards that fact.67 The Connecticut law prohibits processing of children’s personal information for the specific purposes of targeted advertising, sale of personal data, and profiling—unless reasonably necessary to provide the product—and the use of any design to “significantly increase, sustain or extend any minor’s use.”68 It also prohibits unnecessary collection of precise geolocation information.69 Further, minors over the age of thirteen may consent (i.e., opt in) to otherwise prohibited data collection, but the regulation prohibits any consent mechanism “that is designed to substantially subvert or impair, or is manipulated with the effect of substantially subverting or impairing, user autonomy, decision-making or choice.”70 It maintains a more traditional DPIA reporting requirement without specifying any design features the assessment should describe.71

Since many issues with COPPA stem from its actual knowledge requirement, legislators might consider a less cautious approach by retaining some form of age-estimation requirement. A safer age-estimation requirement might pair a self-declaration of age (with notice that the declaration will not prevent access to the service) with a secondary mechanism, such as usage patterns indicating a likely child user, that would trigger an additional verification check such as those used by COPPA for parental consent—for example, an unrecorded video-chat with a trained representative.72 Since these methods would be used to provide children with a higher level of privacy protection while they access platforms, rather than to prohibit access, there would be less incentive to evade them, and they would still be more protective than current methods.73
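
The sketch below illustrates how such a layered approach might fit together: a self-declaration that never blocks access, a usage-pattern signal that flags a likely child, and a secondary check of the kind COPPA already accepts for parental consent, with heightened privacy applied whenever the user is a declared or suspected minor. All names, thresholds, and signals are hypothetical and are not drawn from any enacted statute.

```python
from enum import Enum, auto
from typing import Optional, Tuple


class PrivacyTier(Enum):
    STANDARD = auto()
    HEIGHTENED = auto()  # high default privacy protections for likely minors


class VerificationStatus(Enum):
    NOT_REQUIRED = auto()
    PENDING_SECONDARY_CHECK = auto()  # e.g., unrecorded video-chat with trained staff
    VERIFIED_ADULT = auto()


def classify_user(
    self_declared_age: int,
    child_likelihood: float,
    secondary_check_passed: Optional[bool] = None,
) -> Tuple[PrivacyTier, VerificationStatus]:
    """Hypothetical two-step age-estimation flow.

    self_declared_age: the user's own declaration, which never blocks access.
    child_likelihood: 0.0-1.0 signal derived from usage patterns (assumed to exist).
    secondary_check_passed: outcome of an additional verification, if one was run.
    """
    # Step 1: self-declaration. A declared minor simply receives heightened privacy.
    if self_declared_age < 18:
        return PrivacyTier.HEIGHTENED, VerificationStatus.NOT_REQUIRED

    # Step 2: a self-declared adult whose usage patterns suggest a likely child
    # triggers a secondary check; heightened privacy applies until it resolves.
    if child_likelihood >= 0.7:  # hypothetical threshold
        if secondary_check_passed:
            return PrivacyTier.STANDARD, VerificationStatus.VERIFIED_ADULT
        return PrivacyTier.HEIGHTENED, VerificationStatus.PENDING_SECONDARY_CHECK

    return PrivacyTier.STANDARD, VerificationStatus.NOT_REQUIRED
```

Because failing or skipping the secondary check only results in stronger privacy defaults rather than loss of access, a design along these lines would preserve anonymity for users who decline to verify.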

Finally, legislators should consider retaining the CAADCA’s specific DPIA reporting requirements—while removing its content-based DPIA provisions74—because those specifications are likely designed to remedy the knowledge imbalance between regulated entities and researchers created by technology companies’ lack of transparency about their design features and internal research.75


* Lindsey Schwartz is a J.D. Candidate (2024) at New York University School of Law. This Contribution was adapted from a final paper for the course Current Issues in Civil Rights and Civil Liberties, taught by Professor Steven Shapiro.

1. Colleen McClain, Michelle Faverio, Monica Anderson, & Eugenie Park, How Americans View Data Privacy, Pew Research Center 3 (Oct. 18, 2023) https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2023/10/PI_2023.10.18_Data-Privacy_FINAL.pdf (indicating that 81% of U.S. adults are concerned about how companies use their data).

2. Cal. Civ. Code §§ 1798.100–1798.199.

3. Thirteen other states have since enacted similarly comprehensive consumer privacy laws and eighteen states have currently active legislation. Int’l Ass’n of Priv. Professionals, US State Privacy Legislation Tracker 2024: Comprehensive Consumer Privacy Bills, https://iapp.org/media/pdf/resource_center/State_Comp_Privacy_Law_Chart.pdf.

4. “Children,” “minors,” or “youth” refer to persons under the age of eighteen, unless otherwise specified.

5. David Stauss & Keir Lamont, The Year That Was in State Privacy, IAPP: The Privacy Advisor (Oct. 20, 2023), https://iapp.org/news/a/the-year-that-was-in-state-data-privacy/.

6. Jennifer Bryant, The ‘Big Shift’ Around Children’s Privacy, IAPP: The Privacy Advisor (Apr. 25, 2023), https://iapp.org/news/a/the-big-shift-around-childrens-privacy/. See also, Off. of the Surgeon Gen., Social Media and Youth Mental Health: The U.S. Surgeon General’s Advisory 3 (2023), https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf (calling attention to the growing concern about the effects of social media on youth).

7. See Bryant, supra note 6.

8. Id.

9. Id.

10. See generally Netchoice, L.L.C. v. Bonta, No. 22-cv-08861-BLF, 2023 U.S. Dist. LEXIS 165500 (N.D. Cal. Sep. 18, 2023) (granting a preliminary injunction against the CAADCA due to likely success on the First Amendment claim); Netchoice, L.L.C. v. Griffin, No. 5:23-CV-05105, 2023 U.S. Dist. LEXIS 154571 (W.D. Ark. Aug. 31, 2023) (granting a preliminary injunction against Arkansas’ Social Media Safety Act for the same).

11. 15 U.S.C. §§ 6501–6506 (1998). See Complying with COPPA: Frequently Asked Questions, Fed. Trade Comm’n, https://www.ftc.gov/tips-advice/business-center/guidance/complying-coppa-frequently-asked-questions (last visited Apr. 15, 2023).

12. See, e.g., Suzanne Kaufman, Note, The Invisible, Yet Omnipresent Ear: The Insufficiencies of the Children’s Online Privacy Protection Act, 78 N.Y.U. Ann. Surv. Am. L. 101, 130 (noting low compliance rates and third-party data-sharing not contemplated by COPPA).

13. 15 U.S.C. § 6502(a)(1).

14. See Declaration of Serge Egelman, Ph.D., at 19–20, Netchoice v. Bonta, 2023 U.S. Dist. LEXIS 165500 (describing how companies avoid actual knowledge and evade regulation).

15. 15 U.S.C. § 6502(b)(1)(A), (B).

16. Id. at (b)(3).

17. See Solon Barocas & Helen Nissenbaum, On Notice: The Trouble with Notice and Consent 5 (Oct. 2009) (unpublished manuscript), https://nissenbaum.tech.cornell.edu/papers/ED_SII_On_Notice.pdf (explaining why notice and consent is “fundamentally inadequate” under current conditions, specifically noting that users “confront . . . full-on barriers to achieving meaningful understanding of the practice and uses to which they are expected to be able to consent”). See also, Aleecia McDonald & Lorrie Cranor, The Cost of Reading Privacy Policies, 4 J. L. & Pol’y Info. Soc’y 543, 564 (2008) (finding that it would take up to 300 hours per year to read all the privacy policies the average user agrees to); George Milne, Mary Culnan & Henry Greene, A Longitudinal Assessment of Online Privacy Notice Readability, 25 J. Pub. Pol’y & Marketing 238, 243, 245 (2006) (finding that most privacy policies are written at the college level and challenging for the average consumer to understand).

18. Tweens, ages eight to twelve, engaged with screen entertainment for 5.5 hours a day on average in 2021, compared with 4.5 hours in 2015. Common Sense Media, The Common Sense Census: Media Use by Tweens and Teens 3 (2021), https://www.commonsensemedia.org/sites/default/files/research/report/8-18-census-integrated-report-final-web_0.pdf. In 2022, 46% of teens said they use the internet almost constantly and 97% said they use it at least daily; even as recently as 2014–2015, those numbers were 24% and 92%, respectively. Emily A. Vogels, Risa Gelles-Watnick & Navid Massarat, Teens, Social Media and Technology 2022, Pew Research Center 8 (2022) https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2022/08/PI_2022.08.10_Teens-and-Tech_FINAL.pdf.

19. The FTC, as the primary enforcement authority responsible for promulgating COPPA rules, initiated rule review ahead of the Commission’s regular schedule twice in response to rapid evolution in the online environment for kids: first in 2010, leading to amendments promulgated in 2013, and again in 2019. See Request for Public Comment on the Federal Trade Commission’s Implementation of the Children’s Online Privacy Protection Rule, 84 Fed. Reg. 35842 (Jul. 25, 2019) (to be codified at 16 C.F.R. pt. 312).

20. See Off. of the Surgeon Gen., Social Media and Youth Mental Health: The U.S. Surgeon General’s Advisory 8–9 (2023), https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf (outlining the risk of harms to children from “content exposure”); FBI and Partners Issue National Public Safety Alert on Financial Sextortion Schemes, Fed. Bureau of Investigation National Press Office (Dec. 19, 2022), https://www.fbi.gov/news/press-releases/fbi-and-partners-issue-national-public-safety-alert-on-financial-sextortion-schemes (alerting the public to the increase in incidents of “financial sextortion” of children and teens online).

21. See Kathleen Ethier, Dear Colleague, Youth Risk Behavior Surveillance Data Summary & Trends Report: 2009–2019, Ctrs. for Disease Control and Prevention (Oct. 23, 2020), https://www.cdc.gov/nchhstp/dear_colleague/2020/dcl-102320-YRBS-2009-2019-report.html; Off. of the Surgeon Gen., supra note 20, at 6–7; American Psych. Ass’n, Health Advisory on Social Media Use in Adolescence 7 (2023), https://www.apa.org/topics/social-media-internet/health-advisory-adolescent-social-media-use.pdf.

22. See Bobby Allen, Here Are 4 Key Points from the Facebook Whistleblower’s Testimony on Capitol Hill, NPR (Oct. 5, 2021, 9:30 PM), https://www.npr.org/2021/10/05/1043377310/facebook-whistleblower-frances-haugen-congress (describing internal Meta (formerly Facebook) studies about the negative impact of Instagram on child users and the bipartisan agreement on the need for regulation); Monica Anderson & Michelle Faverio, 81% of U.S. Adults—Versus 46% of Teens—Favor Parental Consent for Minors to Use Social Media, Pew Research Center (Oct. 31, 2023), https://www.pewresearch.org/short-reads/2023/10/31/81-of-us-adults-versus-46-of-teens-favor-parental-consent-for-minors-to-use-social-media/ (showing that 81% of adults and 46% of teens support parental consent to use social media, 71% of adults and 56% of teens support age-verification, and 69% of adults and 34% of teens support time limits for minors).

23. Cal. Civ. Code § 1798.99.31(a). CAADCA is substantially modeled on the United Kingdom’s Age Appropriate Design Code; the California legislature declared an intention that businesses covered by CAADCA “may look to guidance and innovation in response” to the UK code when developing their services and that the California Children’s Data Protection Working Group, created by CAADCA, consider the guidance from the Information Commissioner’s Office of the UK “when developing and reviewing best practices” related to CAADCA. 2022 Cal Stats. ch. 320.

24. Cal. Civ. Code § 1798.99.30(b)(1), (4).

25. Netchoice, L.L.C. v. Bonta, No. 22-cv-08861-BLF, 2023 U.S. Dist. LEXIS 165500, at *64 (N.D. Cal. Sep. 18, 2023).

26. See id. at *32.

27. Cal. Civ. Code § 1798.99.31(a), (b).

28. Id. § 1798.99.31(a)(1)–(4).

29. Id. § 1798.99.31(a)(6)–(9), (b).

30. Id. § 1798.99.31(a)(5).

31. See, e.g., Neil M. Richards, Reconciling Data Privacy and the First Amendment, 52 UCLA L. Rev. 1149, 1173 (2005) (“Ordinary public and private law rules regulating businesses engaged in the trade in customer data would be, like other forms of commercial regulation, outside the scope of the First Amendment and thus subject to rational basis review. Examples of such laws would include a paradigmatic code of fair information practices regulating the commercial assembly, processing, and use of large-scale consumer databases.”).

32. Sorrell v. IMS Health, 564 U.S. 552 (2011). See, e.g., Agatha Cole, Note, Internet Advertising After Sorrell v. IMS Health: A Discussion on Data Privacy & the First Amendment, 30 Cardozo Arts & Ent. L.J. 283, 285 (2012).

33. Sorrell, 564 U.S. at 562–63.

34. Id. at 570–71.

35. Id. at 577–79.

36. Sorrell, 564 U.S. at 579–80 (“The capacity of technology to find and publish personal information, including records required by the government, presents serious and unresolved issues with respect to personal privacy and the dignity it seeks to secure. In considering how to protect those interests, however, the State cannot engage in content-based discrimination to advance its side of a debate.”).

37. See, e.g., ACA Connects – Am.’s Commc’ns Ass’n v. Frey, 471 F. Supp. 3d 318, 327 (D. Me. 2020) (presuming that Maine’s privacy statute implicates the First Amendment and applying commercial speech scrutiny); ACLU v. Clearview AI, Inc., 2021 Ill. Cir. LEXIS 292, at *17–20, *25 (Ill. Cir. Ct. Aug. 27, 2021) (finding that Sorrell does not require a higher level of scrutiny for Illinois’ Biometric Information Privacy Act because it is content-neutral and determining that it survives intermediate scrutiny).

38. Netchoice v. Bonta, 2023 U.S. Dist. LEXIS 165500, at *32.

39. Cent. Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n, 447 U.S. 557, 566 (1980).

40. Netchoice v. Bonta, 2023 U.S. Dist. LEXIS 165500, at *34 (quoting Sable Commc’ns Cal., Inc. v. FCC, 492 U.S. 115, 126 (1989)).

41. Id. at *35–64.

42. Lorillard Tobacco Co. v. Reilly, 533 U.S. 525, 556 (2001). See also, Posadas de P.R. Assocs. v. Tourism Co., 478 U.S. 328, 341 (1986) (“The last two steps of the Central Hudson [commercial speech] analysis basically involve a consideration of the ‘fit’ between the legislature’s ends and the means chosen to accomplish those ends.”) (citing Central Hudson, 447 U.S. 557).

43. See Bd. of Trustees of State Univ. of N.Y. v. Fox, 492 U.S. 469, 480 (1989) (“What our decisions require is . . . a fit that is not necessarily perfect, but reasonable; that represents not necessarily the single best disposition but one whose scope is ‘in proportion to the interest served[].’”) (quoting In re R. M. J., 455 U.S. 191, 203 (1982)).

44. Cal. Civ. Code § 1798.99.31(a)(1)–(4).

45. Netchoice v. Bonta, 2023 U.S. Dist. LEXIS 165500, at *36.

46. Id. at *37.

47. See Defendant’s Opposition to Plaintiff’s Motion for Preliminary Injunction at 17, Netchoice v. Bonta, 2023 U.S. Dist. LEXIS 165500 (“As long as businesses do not use children’s data, they can continue to offer, provide access to, and recommend any content they want.”); Declaration of Serge Egelman, Ph.D. at 21–22, Netchoice v. Bonta, 2023 U.S. Dist. LEXIS 165500 (describing how algorithms are used to increase engagement and other ways they can harm consumers).

48. Cal. Civ. Code § 1798.99.31(a)(7).

49. Netchoice v. Bonta, 2023 U.S. Dist. LEXIS 165500, at *46–47.

50. See, e.g., Aleecia McDonald & Lorrie Cranor, The Cost of Reading Privacy Policies, 4 J. L. & Pol’y Info. Soc’y 543, 564 (2008) (finding that it would take up to 300 hours per year to read all the privacy policies the average user agrees to); George Milne, Mary Culnan & Henry Greene, A Longitudinal Assessment of Online Privacy Notice Readability, 25 J. Pub. Pol’y & Marketing 238, 243, 245 (2006) (finding that most privacy policies are written at the college level and challenging for the average consumer to understand); Colleen McClain, Michelle Faverio, Monica Anderson, & Eugenie Park, How Americans View Data Privacy, Pew Research Center 3 (Oct. 18, 2023) https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2023/10/PI_2023.10.18_Data-Privacy_FINAL.pdf (finding that 67% of U.S. adults believe they have little to no understanding of what companies are doing with their personal data).

51. Cal. Civ. Code § 1798.99.31(b)(2).

52. Netchoice v. Bonta, 2023 U.S. Dist. LEXIS 165500, at *54–56. The court referenced this rejection of the profiling provision and applied identical reasoning concerning beneficial uses of data to the provision restricting the collection, sale, and retention of children’s personal information. Id. at *56–57.

53. Cal. Civ. Code § 1798.99.31(b)(1).

54. The court acknowledged that ACLU v. Mukasey evaluated the law at issue under strict scrutiny, but gave no further reasoning for why it was relevant to a commercial speech scrutiny analysis of the CAADCA. Netchoice v. Bonta, 2023 U.S. Dist. LEXIS 165500, at *52–53. See also ACLU v. Mukasey, 534 F.3d 181, 191–92 (3d Cir. 2008) (finding the Child Online Protection Act (COPA), a law that prohibited transmitting minors materials (i.e., content) that are harmful to them, was not narrowly tailored).

55. Commercial speech scrutiny, unlike strict scrutiny, does not require that a regulation completely eliminate the asserted harm or use methods that the court believes are the best way to advance the State’s interest. See Greater New Orleans Broad. Ass’n v. United States, 527 U.S. 173, 188 (1999) (“The Government is not required to employ the least restrictive means conceivable[.]”).

56. Cal. Civ. Code § 1798.99.31(a)(1)(B)(i), (iii) (requiring the DPIA to include an assessment of whether the design of the service could harm children “by exposing [them] to harmful, or potentially harmful, content” or “permit children to witness . . . harmful, or potentially harmful conduct”).

57. Cal. Civ. Code § 1798.99.31(a)(5).

58. Zauderer v. Off. of Disciplinary Counsel of S. Ct., 471 U.S. 626, 651 (1985).

59. See Nat’l Inst. of Family & Life Advocates v. Becerra, 138 S. Ct. 2361, 2372 (2018) (invalidating a law that required a disclosure related to abortion because it was “anything but an ‘uncontroversial’ topic” on which the State could compel disclosure).

60. Defendant’s Opposition to Plaintiff’s Motion for Preliminary Injunction at 5, Netchoice v. Bonta, 2023 U.S. Dist. LEXIS 165500 (“[C]hildren may take part in extreme content generation (e.g., challenges) to receive validation online or engage in other harmful activity, such as disordered eating, self-harm, or gambling, based on what they are seeing online.”).

61. See, e.g., Fla. Stat. § 1000.05(4)(a) (limiting educator speech about race); Fla. Stat. § 1000.42(10) (limiting educator speech on LGBT identities); Book Ban Data, Am. Libr. Ass’n (Sept. 2023), https://www.ala.org/advocacy/bbooks/book-ban-data (emphasizing the significant increase in challenged books in school and public libraries in 2023).

62. See Becerra, 138 S. Ct. at 2372 (refusing to apply the Zauderer standard because the notice at issue was not limited to factual, uncontroversial information).

63. Netchoice v. Bonta, 2023 U.S. Dist. LEXIS 165500, at *40.

64. See, e.g., Reno v. ACLU, 521 U.S. 844, 882 (1997) (finding that “the CDA places an unacceptably heavy burden on protected speech” in part because the age-verification defense was not feasible for most Internet businesses and would bar adults without credit cards); Am. Booksellers Found. v. Dean, 342 F.3d 96, 99 (2d Cir. 2003) (finding that age-verification required users to forgo anonymity).

65. Tony Allen, Lynsey McColl, Katharine Walters, & Harry Evans, Measurement of Age Assurance Technologies: A Research Report For The Information Commissioner’s Office, Info. Commr’s Off. 10–12, 26 (2022).

66. Cal. Civ. Code § 1798.99.31(b)(8).

67. Conn. Gen. Stat. § 42-529a(a) (2023).

68. Id. § 42-529a(b)(1) (2023).

69. Id. § 42-529a(b)(2) (2023).

70. Id. § 42-529a(b)(3), (c) (2023).

71. Id. § 42-529b (2023).

72. See Complying with COPPA: Frequently Asked Questions, Fed. Trade Comm’n, https://www.ftc.gov/tips-advice/business-center/guidance/complying-coppa-frequently-asked-questions (last visited Apr. 15, 2024) (describing approved methods of obtaining parental consent including “[h]aving the parent call a toll-free telephone number staffed by trained personnel, or have the parent connect to trained personnel via video-conference”).

73. Jennifer Bryant, The ‘Growing Ecosystem’ of Age Verification, IAPP: The Privacy Advisor (Mar. 28, 2023), https://iapp.org/news/a/the-growing-ecosystem-of-age-verification/ (noting kids’ ability to evade age-verification systems and that systems designed primarily to restrict use, rather than provide benefits like high default privacy, encourage evasion).

74. Cal. Civ. Code § 1798.99.31(a)(1)(B)(i), (iii).

75. As the Meta whistleblower reports demonstrate, companies have access to the data to determine the scope and source of harms on their platforms. See Bobby Allen, Here Are 4 Key Points from the Facebook Whistleblower’s Testimony on Capitol Hill, NPR (Oct. 5, 2021), https://www.npr.org/2021/10/05/1043377310/facebook-whistleblower-frances-haugen-congress. See also, Off. of the Surgeon Gen., Social Media and Youth Mental Health: The U.S. Surgeon General’s Advisory 11 (2023), https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf (“There is broad concern among the scientific community that a lack of access to data and lack of transparency from technology companies have been barriers to understanding the full scope and scale of the impact of social media on mental health and well-being.”).