WASHINGTON—Artificial intelligence is being used to spread disinformation through both sophisticated, complex manipulations and easily debunked efforts, but government regulation may not be able to stop these attacks, a cybersecurity expert said Wednesday.
“Technology will always move faster than law and regulation,” said Laura Rosenberger, director of the Alliance for Securing Democracy, a program of the German Marshall Fund that develops strategies to counter Russian and other countries’ attempts to influence U.S. elections and otherwise undermine democratic institutions.
Audio or visual content that has been manipulated so thoroughly that it cannot easily be debunked is called a deep fake, Rosenberger said. Such fakes can make it seem as if you are talking online with another person when you are not.
Shallow fakes are more easily debunked because they involve only slight manipulation of content to deceive an audience, she explained. The doctored footage of CNN reporter Jim Acosta at a recent press briefing, released by the White House, is an example; the footage was sped up in an attempt to deceive viewers.
“Shallow fakes are insidious and can change the discourse because lies travel faster than the truth,” said Rosenberger.
She said people who see altered content, even shallow fakes, are less likely to see the debunking of that content, leading them to believe the phony content.
But experts said it is unclear who is responsible for stopping the spread of this disinformation: the federal government or the private companies whose platforms distribute the fakes.
“The government’s role is to identify threats, inform the public and hold the providers of information accountable for knowing who is spreading the disinformation on their platforms,” said former National Security Agency deputy director Richard Ledgett Jr.
But Robert Chesney, a national security professor at the University of Texas School of Law, said that government regulation cannot solve the problem because there is disagreement about which fakes cross a legal line.
“It’s difficult to regulate which fraud crosses the line. Is it political satire? How about political advertising where you make your opponent look a little grainy? Those questions are hard to answer,” Chesney said.
Robyn Caplan, a PhD candidate at the School of Communication and Information Studies at Rutgers University, placed the responsibility squarely at the feet of social media companies and tech giants.
Tech companies need to expand their resources and hire more people to spot and combat the spread of disinformation, be more transparent about the techniques they are implementing to curtail it, and apply their user agreements consistently so that real accounts are not suspended, Caplan said.
U.S. companies and the government must balance stopping the spread of disinformation against competing with China in the artificial intelligence arms race.
“AI feeds off data and China has billions of people who have no control over how their data is used so the Chinese government can use that data to better inform AI,” said Aviv Ovadya, founding chief technologist at the University of Michigan’s Center for Social Media Responsibility.
Ovadya said the U.S. has not yet set up regulatory frameworks around data because doing so could inhibit the flow of data to AI systems, causing the U.S. to fall behind China in AI development.
Rosenberger agreed that regulatory frameworks restricting data collection in the U.S. would inhibit the American government’s ability to compete with China’s AI capabilities, but she said regulation is nonetheless imperative for the success of democracy.
“Truth is essential for democracy. If we create scenarios where truth does not exist anymore, then we are creating damage to the fundamentals of democracy,” Rosenberger said.