WASHINGTON – Senators and legal experts on Wednesday questioned the reliability of artificial intelligence in analyzing evidence in criminal cases, as more police departments and intelligence agencies turn to the technology for their core functions.
The Miami Police Department, however, has seen a drop in homicides and cleared more murder cases by using technology such as AI, according to Miami’s assistant chief of police, Armando Aguilar. He also attributed the improvements to better community relations and other tools, including license plate readers, facial recognition, video analytics, gunshot detection, and social media threat monitoring.
“The Miami Police Department has successfully leveraged artificial intelligence in the past few years, to great effect,” said Aguilar. Since implementing these changes, Miami has become “safer today than in any other time in our history,” he said.
Artificial intelligence – whether it’s chatbots like ChatGPT or facial recognition on smartphones – has become ubiquitous and should not be feared when used to solve and prevent crimes, some senators said.
“It’s not Robocop, it’s not the Terminator, it’s not the Matrix, not Ultron, it’s not even Wall-E,” said Sen. Tom Cotton (R-Ark.). The technology acts as an aid to officers, not a replacement, he said, and can help create a “faster, cheaper, more accurate criminal justice system.”
Still, other experts worry that many of these tools have not been around long enough to be regulated properly.
Since AI cannot be scrutinized or cross-examined in a court of law, these tools may “present troubling obstacles to fair and open proceedings,” said Rebecca Wexler, an assistant law professor at the University of California, Berkeley.
She told lawmakers that many people have allegedly been wrongfully arrested or falsely accused as a result of bad evidence provided to AI programs. One challenge, she noted, is that many AI software companies do not allow peer review, which is standard for products with such high stakes.
Congress must ensure that AI tools can be independently audited, and software companies should be prohibited from invoking privilege as a way to avoid scrutiny, according to Wexler.
“When academic researchers with no stake in the outcome attempted to actually perform independent research into quality assurance and validation of their product, the company used contract law to stop that from happening,” she said.
Wexler said the lack of transparency raises concerns about possible algorithmic flaws and poor analysis by tech companies, and can prevent the algorithms from becoming more accurate and less biased.
False conclusions from AI systems present a “legitimate concern,” Wexler said, as poor-quality evidence can distort how emerging technologies operate and lead to false arrests and wrongful charges.
Sen. Cory Booker (D-N.J.) said he worries about ongoing surveillance, especially in underprivileged areas, as law enforcement deploys increasingly sophisticated AI technologies. People in poorer areas, he said, may “feel like they are under a surveillance state” and are losing their basic privacy rights.
As the technology evolves rapidly and new applications are created, lawmakers are struggling to stay ahead of the curve to regulate AI, he added.
“We have not moved as fast as the innovations around us,” Booker said. “Government has not been able to keep up.”