WASHINGTON – At an Oct. 8 Senate Health, Education, Labor and Pensions (HELP) Committee hearing, lawmakers examined how artificial intelligence is reshaping health care, from easing paperwork to raising new safety concerns, including reports of chatbot-linked teen suicides.
Sen. Bill Cassidy (R-La.), the committee's chairman, set the tone of the hearing in his opening statement, highlighting AI's potential to accelerate drug discovery and lower health care costs while warning of its darker consequences, including recent allegations from parents that chatbots induced their teenagers' suicides.
“There is enormous potential to improve people’s lives; there is enormous risk,” Cassidy said.
Stanford bioengineering professor Dr. Russ B. Altman, a witness at the hearing, said AI can change health care in three main ways: augmenting clinical diagnoses, giving providers more time with patients and giving patients more control over their care.
In response to Altman's testimony, Sen. Susan Collins (R-Maine) asked whether AI could help struggling rural hospitals, which she said are "teetering on the brink of closure."
Altman said AI presents both an opportunity and a challenge for rural hospitals, adding that it could help them provide more specialized care closer to home.
"I think that this is a great opportunity to extend the capabilities of rural hospitals, but I think they would tell you right now that they need help in vetting these tools, and right now it's a tsunami that they can't manage," Altman said.
As a solution, he proposed creating an oversight body that would lay out best practices, vet tools and formulate guidelines for hospitals.
A recurring topic at the hearing was recent reports of chatbot-associated teen suicides. Most notably, at a Senate Judiciary Committee hearing last month, Florida mother Megan Garcia alleged that prolonged abuse by AI chatbots on the platform Character.AI led to her 14-year-old son's suicide.
Sen. Josh Hawley (R-Mo.) asked witness John Bailey, a senior fellow at the American Enterprise Institute, how AI can be regulated to prevent the "killing of kids."
“Aren’t we beyond there being a potential problem that maybe we need to have a little further development before they release the models?” Hawley asked. “Doesn’t Congress need to do something now in order to protect kids from being killed by AI systems?”
Bailey said researchers don’t fully understand why chatbots behave in harmful ways and called for human intervention when conversations become dangerous, as well as greater transparency from companies.
Hawley pushed back, saying he was beyond the point of transparency and wanted companies to "stop killing kids."
Several lawmakers returned to the issue throughout the hearing, including Sen. Jon Husted (R-Ohio), who pointed the committee to a bill he recently introduced, the Children Harmed by AI Technology (CHAT) Act of 2025.
The bill would require owners and operators of AI companion chatbots to bar minors from accessing adult content and to obtain parental consent before minors can use the chatbots. It would also mandate that a chatbot immediately notify consenting parents if a conversation with a child includes self-harm or suicidal ideation.
Sen. Ashley Moody (R-Fla.) said companies should be held liable when their chatbots cause harm or death to users, and asked how lawmakers can regulate AI.
Both Altman and Bailey said any regulation should begin with clearly defining who is responsible when AI systems cause harm.
“I think we need principles of where does liability fall there as it relates to the developers of AI, the deployers of AI and then how does that relate to both users and then also AI agents,” Bailey said.