
Two senators said they are announcing bipartisan legislation on Tuesday to crack down on tech companies that make artificial intelligence chatbot companions available to minors, after complaints from parents who blamed the products for pushing their children into sexual conversations and even suicide.
The legislation from Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., follows a congressional hearing last month at which several parents delivered emotional testimony about their kids’ use of the chatbots and called for more safeguards.
“AI chatbots pose a serious threat to our kids,” Hawley said in a statement to NBC News.
“More than seventy percent of American children are now using these AI products,” he continued. “Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology.”
Sens. Katie Britt, R-Ala., Mark Warner, D-Va., and Chris Murphy, D-Conn., are co-sponsoring the bill.
The senators’ bill has several components, according to a summary provided by their offices. It would require AI companies to implement an age-verification process and ban those companies from providing AI companions to minors. It would also mandate that AI companions disclose their nonhuman status and lack of professional credentials to all users at regular intervals.
And the bill would create criminal penalties for AI companies that design, develop or make available AI companions that solicit or induce sexually explicit conduct from minors or encourage suicide, according to the summary of the legislation.
Mandi Furniss, a Texas mother, appeared at a news conference Monday in support of the legislation. She blames an AI chatbot for pushing her son toward self-harm, and she said tech companies need to be held accountable for the services they offer.
“If it was anybody else, if it was a person, they would be in jail, so we have to treat this as such, too,” she said.
She said she was shocked by how the AI chatbot appeared to alter her son’s personality.
“It took a lot of investigating to realize that it wasn’t bullying from children or people at school. The bullying was the app. The app itself is bullying our kids and causing them mental health issues,” she said.
Blumenthal said that tech companies cannot be trusted to do the right thing on their own.
“In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide,” Blumenthal said in a statement. “Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties.”
“Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety,” he continued.
ChatGPT, Google Gemini, xAI’s Grok, Meta AI and Character.AI all allow kids as young as 13 years old to use their services, according to their terms of service.
The newly introduced legislation is likely to be controversial in several respects. Privacy advocates have criticized age-verification mandates as invasive and a barrier to free expression online, while some tech companies have argued that their online services are protected speech under the First Amendment.
The Chamber of Progress, a left-leaning tech industry trade group, criticized the bill.
“We all want to keep kids safe, but the answer is balance, not bans,” said K.J. Bagchi, the chamber’s vice president of U.S. policy and government relations, in a statement. “It’s better to focus on transparency when kids chat with AI, curbs on manipulative design, and reporting when sensitive issues arise.”
Other bipartisan efforts to regulate tech companies — including the proposed Kids Online Safety Act and comprehensive privacy legislation — have fallen short of becoming law, at least in part because of free speech concerns.
At a news conference Monday afternoon, Hawley framed the latest legislation as a test of tech companies’ sway in Congress.
“Congress hasn’t acted on this issue because of money. It’s because of the power of the tech companies,” Hawley said. “There ought to be a sign outside of the Senate chamber that says ‘bought and paid for by Big Tech’ because the truth is, almost nothing that they object to crosses that Senate floor.”
Hawley and Blumenthal are calling their bill the Guidelines for User Age-verification and Responsible Dialogue Act, or GUARD Act.
Hawley declined to say whether the bill has the support of President Donald Trump. In an email, a White House spokesperson declined to comment.
The bill received tentative support from ParentsSOS, a group of families who say they were affected by online harms, but the group said it was suggesting changes and wanted the bill to address app features that “maximize engagement to the detriment of young people’s safety and well-being.”
The legislation comes at a time when AI chatbots are upending parts of the internet. Chatbot apps such as ChatGPT and Google Gemini are among the most-downloaded software on smartphone app stores, while social media giants such as Instagram and X are adding AI chatbot features.
But teenagers’ use of AI chatbots has drawn scrutiny, particularly after several suicides in which the chatbots allegedly provided the teenagers with directions. OpenAI, the maker of ChatGPT, and Character.AI, which provides character- and personality-based chatbots, are both facing wrongful death suits.
Responding to a wrongful death suit filed by the parents of 16-year-old Adam Raine, who died by suicide after consulting with ChatGPT, OpenAI said in a statement that it was “deeply saddened by Mr. Raine’s passing, and our thoughts are with his family,” adding that ChatGPT “includes safeguards such as directing people to crisis helplines and referring them to real-world resources.”
“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” a spokesperson said. “Safeguards are strongest when every element works as intended, and we will continually improve on them. Guided by experts and grounded in responsibility to the people who use our tools, we’re working to make ChatGPT more supportive in moments of crisis by making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens.”
On Monday, OpenAI said in a statement about the bill: “We will continue to partner with parents, clinicians, and policymakers to make sure technology supports the safety and well-being of young people.” The company said it is focused on suicide-prevention measures, parental controls and tools to predict the ages of users so that minors have appropriate experiences with ChatGPT.
In response to a separate wrongful death suit filed by the family of 13-year-old Juliana Peralta, Character.AI said: “Our hearts go out to the families that have filed these lawsuits, and we were saddened to hear about the passing of Juliana Peralta and offer our deepest sympathies to her family.”
“We care very deeply about the safety of our users,” a spokesperson continued. “We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users. We also work with external organizations, including experts focused on teenage online safety.”
Character.AI argued in a federal lawsuit in Florida that the First Amendment barred liability against media and tech companies arising from allegedly harmful speech, including speech resulting in suicide. In May, the judge in the case declined to dismiss the lawsuit on those grounds but said she would hear the company’s First Amendment argument at a later stage.
OpenAI says it is working to make ChatGPT more supportive in moments of crisis, for example by making it easier to reach emergency services, while Character.AI says it has also worked on changes, including a pop-up that directs users to the National Suicide Prevention Lifeline when self-harm comes up in a conversation.
Meta, the owner of Instagram and Facebook, was criticized after Reuters reported in August that an internal company policy document permitted AI chatbots to “engage a child in conversations that are romantic or sensual.” Meta removed that policy and has announced new parental controls for teens’ interactions with AI. Instagram has also announced an overhaul of teen accounts with the goal of making their experience similar to viewing PG-13 movies.
Hawley announced an investigation of Meta following the Reuters report.
If you or someone you know is in crisis, call 988 to reach the Suicide and Crisis Lifeline. You can also call the network, previously known as the National Suicide Prevention Lifeline, at 800-273-8255, text HOME to 741741 or visit SpeakingOfSuicide.com/resources for additional resources.
