Release Date: November 15, 2023
BUFFALO, N.Y. — The images are provocative. Joe Biden and Kamala Harris embrace with glee in the White House to celebrate Donald Trump’s indictment in an alleged hush money scheme. A conservative activist has shared them with two million followers on X.
But are the photos real?
The question came to University at Buffalo media forensics expert Siwei Lyu from editors at Agence France-Presse, who were seeking an urgent answer.
Lyu quickly ran the images through a series of detection algorithms and provided the news agency with proof that the images were generated by artificial intelligence, otherwise known as deepfakes: Harris’ hand appears to have six fingers.
“Every time a reporter asks me about the authenticity of a piece of media and I see that my analysis contributes to the debunking of misinformation and disinformation, that is very rewarding,” says Lyu, PhD, co-director of the UB Center for Information Integrity, which fights online misinformation and disinformation.
Providing such expertise to news media worldwide (recent examples include USA Today, Poynter, PolitiFact and Reuters) is a weekly occurrence for Lyu, a SUNY Empire Innovation Professor of computer science and engineering, who is also developing tools to help older adults and children spot online deceptions.
His expertise will soon be available to the public, too. In the coming weeks, he and his students plan to launch the DeepFake-o-meter, an open platform where anyone online can submit a potential AI-generated video for analysis.
With next year’s presidential election and the emergence of content-generating AI tools like ChatGPT, Lyu expects his role as a deepfake detective will continue to grow.
The video looks like a typical Buffalo Bills postgame news conference. Star quarterback Josh Allen stands at a microphone wearing his familiar white No. 17 jersey.
Then he starts speaking.
“I know in a different and more real world, we were beaten,” Allen says. “But in this universe created by artificial intelligence, we won the game.”
The video is a deepfake created by Lyu to educate the public about synthetic media.
Allen’s words were written by ChatGPT, while his voice was created by Prime Voice AI. The lip-syncing was done using Wav2Lip. The video was complemented by a news article about the Bills’ imagined victory, complete with more ChatGPT text and a Stable Diffusion-generated image. The story’s alleged author even has an X account, with a profile picture from thispersondoesnotexist.com.
Every one of these tools is free and can be accessed through a web browser.
“So you do not need to know programming and machine learning — all you need is a computer with an internet connection,” Lyu says.
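For the technically curious, the lip-syncing step can be reduced to a single command. The sketch below drives Wav2Lip’s published inference script from Python; the file names are hypothetical, and the flags are taken from the project’s public documentation, so they may differ in your copy of the repository.

```python
# Minimal sketch: driving Wav2Lip's inference script from Python.
# Assumes the Wav2Lip repo is cloned locally and a pretrained checkpoint
# has been downloaded; all file paths below are hypothetical.
import subprocess

subprocess.run(
    [
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # pretrained model
        "--face", "press_conference.mp4",   # source video of the speaker
        "--audio", "cloned_voice.wav",      # AI-generated voice track
        "--outfile", "deepfake_result.mp4", # lip-synced output video
    ],
    cwd="Wav2Lip",  # run inside the cloned repository
    check=True,
)
```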
The ease of creating deepfakes underscores their danger, Lyu says. If anyone can make them, anyone can use them to manipulate something far more consequential than a football game.
The technology has frequently been used to impersonate politicians, as well as to insert people’s likenesses into pornographic videos without their consent.
To combat AI-generated media, Lyu turns to an unlikely ally: AI.
He and students in his UB Media Forensics Laboratory train machine learning algorithms to spot AI-generated media by feeding them tens of thousands of real and fake images, videos and audio samples. If an algorithm correctly judges a sample’s authenticity, it’s rewarded. If it’s wrong, it’s penalized.
“The internal pattern of the real images will get boosted inside the algorithm's brain,” Lyu says. “Slowly, the algorithm figures out what are the right things to look at in an image, and what are the right ways to look at it.”
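In conventional machine learning terms, that reward-and-penalty loop is supervised training against a loss function. The following sketch, a hypothetical PyTorch example rather than the Media Forensics Laboratory’s actual code, shows the basic shape of one training step for such a detector.

```python
# Hypothetical sketch of the reward/penalty loop Lyu describes: a binary
# classifier learns to label images as real (0) or fake (1). Illustrative
# only; not the Media Forensics Laboratory's actual code.
import torch
import torch.nn as nn

model = nn.Sequential(            # toy detector: a tiny convolutional net
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),             # one output: the "fakeness" score
)
loss_fn = nn.BCEWithLogitsLoss()  # low loss = "reward", high loss = "penalty"
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images, labels):
    """One update: images is a batch of tensors, labels are 0 (real) or 1 (fake)."""
    scores = model(images).squeeze(1)
    loss = loss_fn(scores, labels)  # wrong guesses raise the loss
    optimizer.zero_grad()
    loss.backward()                 # nudge weights toward correct answers
    optimizer.step()
    return loss.item()

# Example with random stand-in data (real training uses tens of thousands
# of labeled real and fake samples, per the article):
batch = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,)).float()
print(training_step(batch, labels))
```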
Their algorithms might look for infrequent blinking or whether a hand has five fingers. Their latest can even spot irregularities beneath the surface of an image that the human eye can’t see.
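Those beneath-the-surface cues are often statistical. As a toy illustration, and not one of the lab’s detectors, a script can measure how much of an image’s energy sits at high spatial frequencies, where some generators leave unusual fingerprints:

```python
# Toy illustration of a "beneath the surface" cue: how much energy an
# image carries at high spatial frequencies. Some generators leave
# unusual high-frequency fingerprints. Not the lab's actual method.
import numpy as np

def high_frequency_ratio(gray_image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy beyond `cutoff` of the Nyquist radius."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    high = spectrum[radius > cutoff].sum()
    return float(high / spectrum.sum())

# A detector might compare this ratio against ranges learned from real photos:
image = np.random.rand(256, 256)  # stand-in for a grayscale image
print(f"high-frequency energy ratio: {high_frequency_ratio(image):.3f}")
```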
The algorithms can make their determination in less than a second, which is beneficial for both the Lyu lab’s workload and journalists’ deadlines. Lyu will typically provide journalists with a brief report explaining how the algorithm reached its conclusion.
Still, the algorithms are not perfect. For example, they flagged the spliced steps in a recent viral deepfake of Sen. Rand Paul wearing a red bathrobe at the U.S. Capitol, but not Paul’s oddly curved thumb.
That’s why Lyu stresses that humans need to be involved in the detection process. He and his students visually inspect every piece of media and include their own observations in their reports to journalists.
“Algorithms are probabilistic — they usually answer with confidence but there’s a range of data in which they cannot make definite answers,” Lyu says. “It should be a collaborative relationship between algorithms and humans.”
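One way to picture that collaboration is a simple triage rule: accept the algorithm’s verdict only when its confidence is high, and route the murky middle range to a human analyst. The cutoff values in this sketch are hypothetical.

```python
# Hypothetical triage rule for the human-algorithm collaboration Lyu
# describes: trust the detector only outside its uncertain middle range.
def triage(fake_probability: float,
           real_cutoff: float = 0.1,
           fake_cutoff: float = 0.9) -> str:
    """Map a detector's fake-probability score to a verdict."""
    if fake_probability >= fake_cutoff:
        return "likely fake"
    if fake_probability <= real_cutoff:
        return "likely real"
    return "uncertain: send to a human analyst"

for score in (0.03, 0.55, 0.97):
    print(score, "->", triage(score))
```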
The University at Buffalo has been a worldwide leader in artificial intelligence research and education for nearly 50 years. This includes pioneering work creating the world’s first autonomous handwriting recognition system, which the U.S. Postal Service and Royal Mail adopted to save billions of dollars. As New York’s flagship university, UB continues that legacy of innovation today. Its researchers are committed to using AI for social good, from addressing the shortage of speech-language pathologists in K-12 education to combating deepfakes and improving medical imaging.
Tom Dinki
News Content Manager
Physical sciences, economic development
Tel: 716-645-4584
tfdinki@buffalo.edu