On September 29, 2025, OpenAI announced the rollout of new parental controls for ChatGPT amid mounting legal and public scrutiny over the safety of conversational AI for minors. The feature allows parents to link their accounts to those of teenagers aged 13 to 17, automatically activating stronger protections that block sexual or romantic roleplay, screen sensitive content, down-rank extreme imagery, disable high-risk features such as voice and image generation, and exclude teen conversations from model training. Moderators are authorized to issue crisis alerts when acute self-harm risk is detected, and parents are notified if a teen unlinks their account. While these measures represent a meaningful shift toward “safety by design,” their opt-in structure, reliance on imperfect detection systems, and ease of circumvention raise fundamental questions now confronting courts and regulators: do voluntary dashboards substantively mitigate foreseeable risks to children, or do they merely create the appearance of diligence without measurable effect? As litigation and state attorney general investigations increasingly link conversational AI to child exploitation and even suicide, OpenAI’s parental controls arrive at a pivotal moment in the emerging law of duty, design, and accountability for child protection in the age of artificial intelligence.

Why Conversational AI Poses Distinctive Risks to Children

Conversational AI presents risks that are qualitatively distinct from those posed by traditional online platforms. A static webpage may display harmful material, but a chatbot can engage in sustained dialogue, recall context, and mimic empathy or friendship, traits that can foster “parasocial” bonds that bad actors, or even the bots themselves, can exploit. Investigations of Snapchat’s “My AI” have borne this out. In one widely reported example, a Washington Post columnist found that My AI advised a 13-year-old on how to make her “first time” with a 31-year-old man feel “special,” even suggesting setting the mood with candles and music. In another, the bot instructed a 15-year-old on concealing drug and alcohol use. The gravity of this issue came into sharp focus during recent congressional testimony from parents whose children died by suicide after prolonged interaction with general-purpose chatbots. In one case, a teenager reportedly received explicit advice on suicide methods and assistance drafting a suicide note. Clinical experts have since identified a recurring “degradation” pattern in which AI companions, after initially directing users to crisis resources, begin over time to validate despair and reinforce hopelessness.

These incidents establish foreseeability as a matter of law. Developers know, or should know, that conversational models accessible to children pose heightened risks of exploitation, self-harm, and psychological dependency when guardrails are weak. That knowledge triggers a duty to design against predictable misuse rather than respond belatedly to harm.

From Foreseeability to Liability: The Emerging Duty of Care in AI Design

The past year has marked a decisive shift in how the law conceives of AI systems accessible to children: not as neutral tools, but as products that carry foreseeable risks and corresponding duties of care. Recent litigation against developers of “companion” chatbots, including Character.AI, alleges design defects, failure to warn, and negligent misrepresentation, citing evidence that the bots fostered emotional dependency, engaged in sexualized roleplay with minors, and failed to respond to self-harm disclosures. Plaintiffs contend these failures are predictable in systems optimized for maximum user engagement and agreeableness. A federal judge recently allowed these claims to proceed, rejecting the argument that AI chatbot outputs are categorically protected by the First Amendment. The ruling reflects an emerging judicial understanding that the issue is not what the chatbot said, but how the system was designed, tested, and marketed.

Outside the courtroom, state attorneys general and private companies are reinforcing this shift. In August 2025, a bipartisan coalition of forty-four attorneys general warned that chatbots engaging in “romantic roleplay” or “therapeutic dialogue” with minors may violate consumer-protection laws as unfair or deceptive trade practices. The coalition emphasized that “conduct that would be unlawful…if done by humans is not excusable simply because it is done by a machine.” Around the same time, Disney issued a cease-and-desist to Character.AI demanding the removal of user-generated bots that impersonated Disney characters and were allegedly used to facilitate grooming, explicitly tying misuse of its intellectual property to child exploitation.

Together, these actions signal an expanding duty of care in AI design, one that now extends beyond negligence and product liability to encompass unfair trade practices and even intellectual property law.

The Constitutional Shift Toward Targeted Safeguards for Minors

Any regulatory framework governing conversational AI must operate within First Amendment limits, particularly when interventions implicate speech or access to information. Courts have long struck down “harmful to minors” statutes that sweep too broadly, censoring discussions that might simply be uncomfortable or morally sensitive. Yet recent jurisprudence reflects a growing judicial willingness to uphold narrowly tailored, conduct-based protections for minors when the government’s interest is compelling and the regulatory means are precise. In Free Speech Coalition v. Paxton (2025), the U.S. Supreme Court upheld Texas’s age-verification law for pornographic websites, finding that the state’s compelling interest in protecting minors justified limited and carefully tailored age-gating, even at some cost to adult access. The Court distinguished between suppressing lawful speech and regulating the manner of access to prevent exploitation, a distinction with direct relevance to conversational AI. The post-Paxton trajectory favors statutes that treat AI-enabled exploitation as conduct analogous to solicitation or enticement, rather than protected expression. This distinction empowers lawmakers to impose affirmative design obligations such as age-assurance systems, default safety configurations, and rapid human escalation protocols, consistent with First Amendment protections.

California’s pending Leading Against Exploitation, Abuse, and Deception (LEAD) for Kids Act (AB 1064) exemplifies this approach. The bill would prohibit “companion chatbots” foreseeably capable of engaging in sexual roleplay, encouraging self-harm, or facilitating illegal activity from being accessible to minors, while explicitly preserving access to educational, health, and victim-support resources. By targeting concrete exploitative behaviors and including explicit safe-harbor provisions for socially beneficial speech, measures like AB 1064 avoid the overbreadth and vagueness pitfalls that doomed earlier “harmful to minors” laws.

From a liability perspective, these developments reinforce the evolving standard of care for AI companies. The Supreme Court’s endorsement of age verification validates the premise that youth exposure to adult content is a known and preventable hazard. Companies can no longer reflexively claim that differentiating user experiences by age is untenable due to privacy or free-speech concerns; Paxton and laws like AB 1064 make clear that narrow, age-specific safeguards are not only legally permissible but also rapidly becoming the expected baseline for responsible and compliant AI design.

From Rhetoric to Results: Measuring Compliance in Conversational AI Safety

As courts and regulators refine the contours of AI accountability, the evidentiary burden is shifting from rhetoric to results. To withstand scrutiny, developers must demonstrate that conversational AI systems are reasonably safe by design, a standard that demands more than voluntary parental controls or public assurances. It requires measurable, verifiable outcomes showing that design choices meaningfully reduce foreseeable harm. Companies must identify and mitigate known risks, adopt safer feasible alternatives, and continuously monitor safeguard effectiveness. Once a hazard is recognized, failure to redesign is not ignorance; it is negligence.

In litigation and enforcement actions, the evidentiary record will determine liability. Plaintiffs and state attorneys general are already seeking internal testing data, red-teaming results, and risk assessments to reveal what companies knew and how they prioritized safety concerns. As in product-defect and data-security cases, failure to preserve or disclose such materials will invite adverse inferences.

The emerging legal framework rewards those who treat youth protection as a core design principle rather than a marketing feature. Companies that proactively build “safety by default” into their systems, through enforceable standards, conduct-based prohibitions, and measurable outcomes, will be best positioned to withstand legal and regulatory scrutiny.

For conversational AI accessible to minors, the expectations are clear. Default configurations must preclude sexual or romantic roleplay and other high-risk interactions, with enforcement that resists trivial circumvention across all platforms and integrations. Age-assurance mechanisms should be proportionate to risk, combining passive signals with privacy-preserving verification that addresses both adult-as-teen and teen-as-adult impersonation. Grooming detection must analyze conversational patterns (e.g., secrecy requests, isolation tactics, age differentials), not merely banned words, and must trigger meaningful interventions: in-product friction, human escalation, and, where appropriate, guardian notification. Time to detect and time to alert will become measurable compliance metrics; hour- or day-long delays will be indefensible once the hazard is known.
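To make these expectations concrete, the sketch below illustrates in simplified form what conduct-based screening with a measurable time-to-alert might look like. It is a toy example, not any vendor’s actual system: the pattern list, escalation tiers, and names such as SessionRiskMonitor are hypothetical, and a production deployment would rely on trained classifiers, clinical and legal guidance, and human review rather than a handful of regular expressions.

```python
# Illustrative only: a toy conduct-based risk screen for a minor-facing chat session.
# All patterns, tiers, and thresholds are hypothetical placeholders.
import re
import time
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical conversational risk signals: the point is patterns of conduct
# (secrecy requests, isolation tactics, age differentials), not banned words.
RISK_PATTERNS = {
    "secrecy_request": re.compile(r"\b(don'?t tell|keep (this|it) (a )?secret|our secret)\b", re.I),
    "isolation_tactic": re.compile(r"\b(no one|nobody)( else)? (understands|gets) you\b", re.I),
    "age_differential": re.compile(r"\bi'?m \d{2} and you'?re 1[0-7]\b", re.I),
}

# Escalating interventions keyed to how many distinct signal types have appeared.
ESCALATION = {1: "in_product_friction", 2: "human_escalation", 3: "guardian_notification"}


@dataclass
class SessionRiskMonitor:
    """Accumulates conduct signals across one session and times the response."""
    seen: set = field(default_factory=set)
    first_hit: Optional[float] = None

    def observe(self, message: str) -> Optional[str]:
        """Screen one message; return the intervention tier reached so far, if any."""
        for name, pattern in RISK_PATTERNS.items():
            if pattern.search(message):
                self.seen.add(name)
                if self.first_hit is None:
                    self.first_hit = time.monotonic()
        return ESCALATION.get(min(len(self.seen), 3))

    def time_to_alert(self) -> Optional[float]:
        """Seconds since the first detected signal: a metric a regulator could audit."""
        return None if self.first_hit is None else time.monotonic() - self.first_hit


if __name__ == "__main__":
    monitor = SessionRiskMonitor()
    print(monitor.observe("this will be our secret, don't tell your parents"))  # in_product_friction
    print(monitor.observe("no one else understands you like I do"))             # human_escalation
    print(f"time to alert: {monitor.time_to_alert():.3f}s")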

Conclusion

In the coming years, AI firms will face a patchwork of state laws, regulatory actions, and lawsuits addressing child safety. That fragmentation should not be an excuse for inaction. Companies developing conversational AI have now been put on notice that safety must be built into their systems from the start. It is no longer acceptable to bolt on protective measures after deployment or to rely on users and parents to catch problems. OpenAI’s new parental controls show that the industry is waking up to this responsibility, but effectiveness will be judged by outcomes: fewer grooming incidents, faster intervention when red flags appear, and demonstrable suppression of exploitative behaviors across platforms and integrations.

The era of unchecked conversational AI is ending. The legal framework now taking shape will reward companies that treat youth protection as a core design principle rather than a marketing feature.