
AI and Elections: What can you trust?

It may look and sound real, but artificial intelligence is revealing the need for greater restrictions and laws around election content.

WASHINGTON — Artificial intelligence is making it harder for us to know what to trust in this election cycle.

According to the FCC, three-quarters of Americans say they are concerned about misleading political content generated by artificial intelligence. We talked with a renowned AI researcher calling for greater restrictions and laws to help sort out what's real from what's not.

In one example, images circulating on social media were altered to make it appear like Taylor Swift was endorsing Donald Trump for President.

In another, a video shared on X by Elon Musk went viral, its audio altered to sound like the voice of Vice President Kamala Harris.

They both have ONE thing in common.

“You absolutely see deep fakes or cheap fakes being deployed to cause confusion,” said Dr. Joy Buolamwini, an AI researcher. “This is why biometric rights are important.”

Buolamwini is a noted AI expert, best-selling author of “Unmasking AI: My Mission to Protect What Is Human in a World of Machines,” and founder of the Algorithmic Justice League. She says greater protections around the use of our voices and images would make it harder for them to be manipulated.

“We’re also seeing the technology continue to improve where it’s harder and harder to distinguish what’s real and what’s fake.”

Credit: Lesli Foster
Joy Buolamwini speaks with WUSA9's Lesli Foster about her AI research.

Credit: Lesli Foster
Joy Buolamwini speaks with WUSA9's Lesli Foster in front of her book, Unmasking AI.

During January’s New Hampshire primary, thousands of voters received a robocall that mimicked President Joe Biden’s voice. It falsely suggested their vote would not matter and urged Democrats not to vote.

A Democratic consultant — working for a rival candidate — commissioned the AI-generated call. The FCC has since ruled that AI-generated voices in robocalls are illegal.

In the last two weeks, the FCC has moved forward with a proposal to require disclosures of AI-generated content in political ads.

But Buolamwini says this first step doesn’t go far enough. She insists it’s not enough to sound the alarm about bad content. The next step is to also disclose when that content is harmful.

“This is why we actually need comprehensive laws and legislation that give real restrictions about the use of other people’s likeness, particularly for something as high stakes as elections.”

Until then, as we get closer to Election Day, she’s urging all of us to be on guard with what we see and hear in the race for the White House.

Joy wants the government to pass a Bill of Rights to protect our biometric data — and ban the use of these manipulated ads going forward.

The FCC says half the states in the U.S. have laws regulating the use of AI and deepfake technology in elections. The agency's proposal would bring some uniformity.

But ultimately, it’s a change in laws for the entire nation that would really impact this issue. That’s why advocates like Buolamwini are pushing for more comprehensive legislation — and real consequences — moving forward.

Federal regulators said a few weeks ago that the company behind the deceptive New Hampshire robocall agreed to pay a $1 million fine.
