WASHINGTON – Christine Huberty provides legal assistance to people facing health care coverage denials. On May 25, one of her clients, Jim, was hospitalized for pneumonia. He had been receiving chemotherapy and had a history of chronic obstructive pulmonary disease, known as COPD.

Doctors recommended 30 days of short-term rehab. Artificial intelligence thought differently.

His insurance company, Security Health Plan, relied on AI in determining that Jim should spend only half the time his doctors recommended at the rehab facility. Faced with $3,600 in out-of-pocket costs if he stayed the full recommended length of time, Jim discharged himself early.

Huberty, a supervising attorney with the Greater Wisconsin Agency on Aging Resources, said the insurance company argued that the “discharge aid is used as a guide only” and that humans were responsible for all final denial decisions.

She said Jim’s family had to take time off work because of his lack of strength and risk of falling. “He could not climb the three stairs necessary to get into his house,” she said.

The family appealed the insurance company’s decision twice and ultimately succeeded in overturning it.

“The algorithm got it wrong,” Huberty said.

The testimony came Wednesday during a hearing on policy considerations for artificial intelligence in health care, held by the Senate Subcommittee on Primary Health and Retirement Security.

Huberty told lawmakers her Wisconsin agency has seen insurance denials increase to one or two per week. She said insurance companies “bank on patients not appealing” or, in many cases, on elderly clients dying during the appeals process.

Christine Huberty, Supervising Attorney, Greater Wisconsin Agency on Aging Resources, speaks at a Senate subcommittee hearing on artificial intelligence in health care, Wednesday, Nov. 8, 2023. (Luis Castaneda/MNS)

Huberty said she understood AI can’t be eliminated from the health care system, but she called for more transparency so that patients can see “its moving parts.”

Dr. Thomas Inglesby phrased it as a need to “look under the hood” and understand the process by which AI draws its conclusions.

Inglesby, director of the Johns Hopkins Center for Health Security, also called on Congress to give the Department of Health and Human Services the authority to regulate purchases of synthetic nucleic acids in the U.S., building on President Joe Biden’s AI executive order, which covered federal entities.

He also said the U.S. should advocate for strong international standards in biosecurity, beginning by commissioning a risk assessment to determine whether current standards adequately address existing biological risks.

With doctors across the country burdened by documentation, Dr. Keith Sale argued that AI gives him a tool to “get through notes faster” with “more detail and information.”

Dr. Keith Sale, Vice President and Chief Physician Executive of Ambulatory Services at The University of Kansas Health System, speaks at a Senate subcommittee hearing on artificial intelligence in health care, Wednesday, Nov. 8, 2023. (Luis Castaneda/MNS)

Sale, Vice President and Chief Physician Executive of Ambulatory Services at The University of Kansas Health System, expressed concern about ensuring that AI technology complies with privacy laws such as HIPAA, adding that doctors should be able to guide the machine’s inputs and “what it consumes to drive decisions” that physicians rely on in their own judgment.

“Ultimately (it’s) designed to enhance my practice, not replace me in practice,” he said.

Dr. Kenneth Mandl, director of the Computational Health Informatics Program at Boston Children’s Hospital, warned senators of the potential bias AI technologies may carry.

“What if the drug company whispered in the ear of our electronic health record, nudging that AI to favor their pills over our competitors?” Mandl said.

He called for more data sharing between different organizations working with the technology to combat biases in AI models.

Sen. Ben Ray Lujan (D-N.M.) cited a study by the American Medical Association which found that the training of AI models against doctors’ diagnostic decisions was conducted in only three states: California, New York and Massachusetts.

Mandl explained that a method of training AI in which models receive human feedback on their decisions, known as reinforcement learning, is vulnerable to bias when that feedback doesn’t come from diverse sources.

Huberty said patients in rural areas have had to drive “hundreds of miles” because many facilities refuse to accept insurance that relies on AI-based predictive technologies.

Sen. Ed Markey (D-Mass.) pushed for the Biosecurity Risk Assessment Act and the Securing Gene Synthesis Act, and called for legislation establishing a “Privacy Bill of Rights.”

“We don’t need big tech treating our health care system like a lab to experiment on patients,” Markey continued. “Consumers need a health care system that prioritizes people over algorithms.”