AI expert David Doermann tells Congress ‘deepfakes’ threaten elections

David Doermann, director of the University at Buffalo Artificial Intelligence Institute, testifying before Congress on June 13.

Release Date: July 19, 2019

WASHINGTON, D.C. — University at Buffalo artificial intelligence (AI) expert David Doermann testified before Congress on June 13, outlining the national security challenges posed by manipulated forms of digital media.

The hearing, before the House Intelligence Committee, focused on deepfakes: videos and other digital content manipulated or generated by AI to yield seemingly realistic but ultimately fabricated images and sounds.

“For more than five centuries, authors have used variations of the phrase ‘Seeing is believing.’ But in just the past half-decade we’ve come to realize that’s no longer always true,” said Doermann, PhD, director of UB’s Artificial Intelligence Institute.

Doermann, who also serves as an Empire Innovation Professor in the Department of Computer Science and Engineering, previously oversaw a Defense Advanced Research Projects Agency (DARPA) effort to combat evolving image and video manipulation technology. At the hearing, he said deepfakes and other manipulated digital content could be used to influence upcoming elections and spread false information.

Here are some highlights from his testimony:

Long-term outlook for stopping deepfakes

“Let there be no question that this is a race. The better manipulators get, the better detectors need to be. And there’s certainly orders of magnitude more manipulators than there are detectors. It’s also a race that may never end. It may never be won. But it’s one where we must close the gap and make it less attractive — financially, socially, politically — to propagate false information.”

Social media companies should be held accountable

“I want to make it clear, however, that combating synthetic and manipulated media at scale is not just a technical challenge. It’s a social one as well.”

Later in his testimony, Doermann said:

“We need to continue to put pressure on social media to realize that the way their platforms are being misused is unacceptable. They must do all they can to address today’s issues and not allow things to get worse.”

He added:

“The truth of the matter is people who share this stuff are part of the problem, even though they don’t know it.”

Warning labels for false information

“Even if we don’t take down or prevent manipulated media from appearing, we should provide appropriate warning labels that suggest that this is not real or not authentic or not what it’s purported to be.”

No easy solution

“There’s no easy solution. And it’s likely to get much worse before it gets much better.”

Tools behind deepfakes are not malicious

“There’s nothing fundamentally wrong or evil about the underlying technology that gave rise to the concerns that we are testifying about today. Like basic image and video desktop editors, deepfakes [are] only a tool. And there are a lot more positive uses of generative networks than there are negative ones.”

Put tools in the hands of the press, public

“We need to get the tools and processes in the hands of individuals rather than relying completely on the government or on social media platforms to police content. If individuals can perform a sniff test, and the media smells of misuse, they should have ways to verify it, or prove it, or easily report it. The same tools should be available to the press, to social media sites, to anyone who shares and uses this content.”

Media Contact Information

Cory Nealon
Director of Media Relations
Engineering, Computer Science
Tel: 716-645-4614
cmnealon@buffalo.edu