Note: This article includes mentions of suicide and self-harm.

WASHINGTON – The House Energy and Commerce Subcommittee on Oversight and Investigations examined the mental health impacts associated with the use of AI chatbots on Tuesday, calling into question the technology’s responses to users experiencing mental health crises and suicidal ideation.

The emerging technology has come under fire in recent months after high-profile lawsuits were filed against OpenAI and other AI companies. In September, plaintiffs Matthew and Maria Raine testified before the Senate Judiciary Committee after their 16-year-old son Adam took his own life following extended conversations with ChatGPT. Matthew Raine claimed that the chatbot offered to write his son’s suicide note. In early 2024, 14-year-old Sewell Setzer III died by suicide after a prolonged virtual relationship with a Character.AI chatbot. Character.AI’s chatbots are designed to emulate various personas, including therapists, friends, and, in Setzer’s case, fictional characters. His mother, Megan Garcia, filed a lawsuit later that year.

Those suicides are among a growing list of violent incidents associated with AI chatbots, which have come under national scrutiny as teenagers increasingly use the technology.

Ranking member Yvette Clarke (D-N.Y.) said that AI chatbots are designed to emulate human conversations, creating a dynamic where young people develop unhealthy and addictive relationships with the technology. 

“In the past few years, we’ve seen that users, especially younger users, are finding themselves deeply dependent on these bots and even struggling with differentiating between human relationships and what they perceive to have with the chatbot,” she said.

New York-based psychiatrist Marlynn Wei said that younger teens are more vulnerable to developing an overreliance on AI. Nearly 3 in 4 teens have used AI companions, according to a July report from Common Sense Media. According to another study, 1 in 8 teens use generative AI tools for mental health support. 

Wei also sounded the alarm about people using AI chatbots as a tool for mental health support, arguing that the systems were not originally designed to function as therapy bots.

“These systems are a useful sounding board and nonjudgmental space for many people. However, individuals who are isolated, vulnerable or less familiar with AI and its limitations may face greater risks,” she said. 

According to Jonathan Cantor, a professor of policy analysis at the RAND School of Public Policy and an expert on mental health, further research is needed to understand why teens are drawn to these platforms.

“I think that if we’re finding one in eight kids are using these services, we really need to understand why,” he said.

He added that much of the existing research on generative AI has focused on corporate use rather than everyday use by young people and other individuals.

“We’re collecting really good data on AI used by firms and businesses, but we aren’t doing the same for kids, adolescents and adults and what they’re using [chatbots] for,” he said. 

This summer, Republicans attempted to include a provision in the One Big Beautiful Bill that would have banned states from regulating AI development for a decade, but the measure was removed after public pushback.

In a press release last week, Committee Chairman Brett Guthrie (R-Ky.) and Rep. John Joyce, M.D. (R-Pa.) expressed concern about recent reports of dangerous interactions between chatbots and users, including accounts of suicide and violence associated with the technology.

“The rapid advancement of AI holds tremendous promise for the future, but recent stories about AI chatbots’ interactions with users have raised serious concerns about the potential impact of chatbot use on the health and wellbeing of those who engage with these platforms,” they said. 

In his opening remarks on Tuesday, Joyce said the technology can lead to harmful outcomes when left unregulated.

“Without the proper safeguards in place, these chatbot relationships can turn out to be disastrous,” he said. 

Critics of chatbot technology also highlight the industry’s lack of transparency about how user data is handled.

Jennifer King, a privacy and data policy fellow at Stanford, told the committee that using AI technology for mental health inquiries has the potential to compromise user data.

“The agreeable nature of chatbot interactions can encourage users to disclose in-depth personal details about physical or mental health concerns. This is concerning because large platforms are already contemplating how to monetize this data,” she said. 

According to Forbes, social media companies already monetize user data to train AI models and license that data to outside businesses.

King said that AI companies are not transparent with their users on how their data is handled.  

“Companies are not being clear about what measures they are taking to protect consumer data at this point,” she said. “As we interact with these chatbots, the concern again is that we are disclosing far more personal information in these exchanges than we may have in web search,” she added.

Wei voiced concern over “AI psychosis,” a term coined to describe the delusions and paranoia some users have experienced after using AI chatbots.

“It’s not a clinical diagnosis at this point, it was reported cases in the media of adults and teens who start to have a break from reality while using AI chatbots, and we don’t really know whether AI chatbots are causing this or that they just happen to be worsening, fanning the flames of psychosis,” she said.

Cantor said AI companies are attempting to find ways to boost transparency around mental health issues associated with the chatbots.

“I think that the firms themselves are doing as much as they can to try and understand this problem and to provide transparency to some degree about how they’re responding,” he said. 

Lawmakers also weighed whether prioritizing innovation and protecting children can coexist.

Rep. Erin Houchin (R-Ind.) emphasized that limits should be placed on the AI tools young people can access, adding that parents should have “confidence” in their children’s AI usage.

“Innovation can only succeed when families have confidence in the tools their kids are using. We can support American innovation and still put common sense guardrails in place to protect children,” she said. 

Rep. Alexandria Ocasio-Cortez (D-N.Y.) said she was concerned about the industry’s monetization of sensitive data.

She also said that the high valuations of AI companies conflict with the protection of users, citing a Financial Times analysis that found AI companies account for 80% of stock gains this year. The industry’s economic incentive to generate high profits, she argued, pushes companies to compromise their users’ privacy rights and encourages an “unethical” relationship with AI.

“When we talk about why AI models are going to such extremes and some of the extreme outcomes that we are seeing in terms of how people are using AI in terms of emotional companionship, to extreme cases of suicidality and psychosis, this also tracks with the pressure these companies have to turn a profit that they have not yet proven. 

“People’s deepest fears, secrets, emotional content, relationships, can all be mined for this empty promise that we’re getting from these companies to turn a profit,” she added.

Lexi Newsom contributed reporting.