WASHINGTON — Election experts, including a former head of the Federal Election Commission, warned members of Congress on Wednesday of the dangers of AI-generated content in elections.
Trevor Potter, a Republican who chaired the FEC under President Bill Clinton, said he worries that so-called deepfake content — computer-generated videos of fabricated speech and action that seem realistic — will mislead voters.
“Unchecked, the deceptive use of AI could make it virtually impossible to determine who is truly speaking in a political communication, whether the message being communicated is authentic or even whether something being depicted actually happened,” he said during a Senate hearing. “This could leave voters unable to meaningfully evaluate candidates and candidates unable to convey their desired message to voters, undermining our democracy.”
To combat AI-generated disinformation in elections, Potter recommended that Congress pass a law banning the use of AI to commit electoral fraud and expand existing disclosure rules to cover the use of AI in political content.
He also said Congress should strengthen the FEC’s power to protect elections against the misrepresentation of candidates.
In early September, Sen. Amy Klobuchar (D-Minn.) introduced bipartisan legislation to address the threats of deepfakes and prohibit the distribution of “deceptive AI-generated audio or visual media” related to candidates for federal offices.
Klobuchar, chair of the Senate Rules and Administration Committee, which hosted Wednesday’s hearing, said deepfakes pose a grave threat to voters because of the scale of the misleading content the new technology will enable.
She cited a video of someone who looks like Sen. Elizabeth Warren (D-Mass.) saying people from the opposing political party should not be allowed to vote. The video was created using deepfake technology.
Several states, including Texas, California and Virginia, have already enacted legislation regulating the use of AI in elections. A new law took effect in Minnesota on Aug. 1 making it a crime to use deepfake technology to influence elections.
“In our office, we’re trying to be proactive. First, we’re leading with the truth. That means pushing out reliable and accurate information while also standing up to mis- and disinformation quickly,” Minnesota Secretary of State Steve Simon, the state’s top election official, said at the hearing. “Second, we’ve been working with local and federal partners to monitor and respond to inaccuracies that could morph into conspiracy theories on election-related topics.”
He said he supports a federal effort to rein in AI technology because “the impacts of AI will be felt at a national level.”
Republican senators and two experts at the hearing, however, raised concerns that such regulation could violate the First Amendment.
Sen. Bill Hagerty (R-Tenn.) said he recognizes the problems posed by emerging AI technology but argued the solution should not be hastily crafted regulation.
“My point is that Congress and the Biden administration should not engage in heavy-handed regulation with uncertain impacts that I believe pose a great risk to limiting political speech,” Hagerty said. “We shouldn’t immediately indulge the impulse of government to just do something.”
Ari Cohn, the free speech counsel at the technology think tank TechFreedom, also said that even tailored legislation that only regulates AI-generated content poses threats to the First Amendment’s protection of free speech.
“A law prohibiting AI-generated political speech would also sweep an enormous amount of protected and even valuable political discourse under its ambit,” Cohn said.
But others, including Potter, contended that fraudulent speech isn’t protected under the First Amendment.
“The First Amendment goes to my right, our right, to say what we think, even about the government and in campaigns, without being penalized,” Potter said. “But the whole point of this conversation is you are falsifying the speaker … It is creating this fake speech where the speaker never actually said it.”
Klobuchar’s bipartisan deepfake regulation bill is currently in the Rules and Administration Committee. She said she hopes the committee will act on AI regulation before the end of the year.
“With bipartisan cooperation put in place, we will get the guardrails that we need,” Klobuchar said. “We can harness the potential of AI, the great opportunities, while controlling the threats we now see emerging and safeguard our democracy from those who would use this technology to spread disinformation and upend our elections.”