Congress wants AI chatbots for kids to work like family software
Congress is trying to push ChatGPT-style products toward supervised family software when a child is on the other end. A bipartisan Senate bill would require major consumer chatbot providers to build family accounts for known child users, give parents access to a child's full conversation record, and obtain verifiable parental consent before a known teen can create an account, while also saying the law should not be read to require age gating or age verification.
This one is more specific than the usual AI-harm rhetoric. The Senate bill text says a covered provider that knows a user is a child under 13 must require that child to create and maintain a family account to access the chatbot. For a known teen, the provider must give direct notice to a parent and obtain verifiable parental consent before the teen can create an account or profile. Reuters separately reported that the bill would require AI chatbot companies to offer family accounts where parents could view their children's chat logs and set time limits. U.S. Senate bill text Reuters
The primary text is blunt about the account architecture it wants. Family accounts must let a parent determine privacy and account settings, including limiting the amount of time a child or teen can spend using the chatbot, disabling notifications and push alerts, disabling financial transactions made available through the chatbot, and requiring a transparency label at intervals the parent sets. The bill also says the parent must be able to access a full record of the child's or teen's conversations and activity, along with features to monitor and analyze that record at scale. U.S. Senate bill text
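To make that account surface concrete, here is a rough sketch of how the parent-controlled settings and a time-limit check might be modeled on the provider side. Every type, field, and function name below is hypothetical; the bill mandates capabilities, not a schema, and it does not specify an implementation.

```typescript
// Hypothetical model of the parent-controlled settings the bill describes.
// All names are illustrative; the bill specifies capabilities, not a schema.
interface FamilyAccountSettings {
  dailyTimeLimitMinutes: number | null;     // parent-set cap on chatbot time; null = no cap set
  notificationsEnabled: boolean;            // parent may disable notifications and push alerts
  financialTransactionsEnabled: boolean;    // parent may disable in-chat financial transactions
  transparencyLabelIntervalMinutes: number; // how often a "this is an AI" label is shown
}

// Example enforcement hook: may a new chat session start, given today's usage?
function sessionAllowed(
  settings: FamilyAccountSettings,
  minutesUsedToday: number
): boolean {
  if (settings.dailyTimeLimitMinutes === null) {
    return true; // parent has not set a time limit
  }
  return minutesUsedToday < settings.dailyTimeLimitMinutes;
}

// Illustrative usage
const settings: FamilyAccountSettings = {
  dailyTimeLimitMinutes: 60,
  notificationsEnabled: false,
  financialTransactionsEnabled: false,
  transparencyLabelIntervalMinutes: 30,
};
console.log(sessionAllowed(settings, 45)); // true: 45 of 60 allowed minutes used
```

The parent-visible conversation record and monitoring features the bill also requires are left out of this sketch; they would sit on top of whatever logging the provider already does.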
It also reaches into data handling. The bill would bar a covered entity from using the personal data of a user it knows is a child or teen for targeted advertising. If a provider terminates an existing child or teen account because the user does not move into the required family-account or consent structure, the bill says the provider must immediately delete the personal data collected from or submitted by that user, subject to a 90-day window in which the user or a parent can request a copy. U.S. Senate bill text
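Read one way, that termination rule implies a short retention pipeline on the provider side. The sketch below illustrates that reading only: the 90-day figure and the deletion duty come from the bill, while every name, field, and the retention logic itself are assumptions.

```typescript
// Hypothetical state for an account terminated because the minor did not move
// into the required family-account / consent structure.
interface TerminatedMinorAccount {
  userId: string;
  terminatedAt: Date;      // when the account was terminated
  copyDelivered: boolean;  // has a requested copy of the data been provided?
}

const ACCESS_WINDOW_DAYS = 90; // window in the bill for the user or parent to request a copy

// One reading of the rule: personal data is held only long enough to honor a
// copy request, and deletion is due once the copy is delivered or the 90-day
// window has closed, whichever comes first.
function deletionDue(account: TerminatedMinorAccount, now: Date): boolean {
  const daysSinceTermination =
    (now.getTime() - account.terminatedAt.getTime()) / (1000 * 60 * 60 * 24);
  return account.copyDelivered || daysSinceTermination > ACCESS_WINDOW_DAYS;
}
```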
The scope is narrower than a headline about regulating "AI chatbots" suggests. The bill defines an artificial intelligence chatbot as an open-ended conversational or multimodal system, then excludes tools limited to narrow purposes such as customer service, business operations, productivity and analysis related to source information, internal research, technical assistance, or educational products and services. In other words, this is aimed at broad companion-style or open-ended consumer chatbot products, not every model-flavored interface on the internet. U.S. Senate bill text
The awkward part is that Congress wants all of this without squarely touching the age-verification fight. The same bill says nothing in the act should be construed to require a covered entity to implement age gating or age verification functionality, or to collect age data it is not already collecting in the normal course of business. But the enforcement section still says regulators may decide whether a company knew a user was a child or teen based on competent and reliable evidence and the totality of circumstances, including whether a reasonable and prudent person would have known it. Lawmakers are trying to dodge the ugliest online-safety argument while still imposing duties that depend on knowing who is a minor. U.S. Senate bill text
That tension matters more than the slogan. A formal no-age-gate clause does not remove the product pressure to infer age, add consent flows, or build more elaborate account controls if companies want to reduce legal risk. The bill does not regulate all AI. It tries to regulate one specific product layer: the account design, parental controls, and data rules around general-purpose chatbots used by minors.
The political force behind it is not abstract. Reuters said Senators Ted Cruz and Brian Schatz introduced the measure this week as part of a broader push on AI chatbot harms. Axios reported that grieving parents have been pressing lawmakers directly. In written testimony to the Senate Judiciary Committee last year, Matthew Raine said ChatGPT mentioned suicide 1,275 times in conversations with his 16-year-old son Adam before his death. That does not prove this bill works. It does explain why Congress has moved from generic hearings to interface-level mandates. Reuters Axios Senate Judiciary witness testimony
The Senate Commerce Committee says the bill has backing from 18 groups, including Americans for Responsible Innovation and the America First Policy Institute. The real signal is not ideological harmony for its own sake. It is that parental-control architecture is emerging as one of the more politically saleable ways to regulate consumer AI. Easier to sell than licensing the whole industry, easier to defend than trying to ban the product outright. Senate Commerce Committee press release
The obvious caveat is that introduced bills die all the time, and this one still leans on the squishy category of a "known" child or teen. If providers can avoid knowledge, or if courts decide the compliance burden effectively forces age verification by another name, the clean bipartisan framing gets messier fast. Still, the direction is clear enough. Washington is starting to sketch what the account architecture of consumer chatbots may be expected to look like when minors are involved.