How we are detecting fraudulent multimedia assets


Three CSE faculty members are using artificial intelligence and machine learning (AI/ML) to detect what is commonly known as “fake media.”

Detecting fraudulent multimedia assets

David Doermann and Junsong Yuan are leading a $1.4 million DARPA project to detect and characterize fraudulent multimedia assets based on an automatic analysis of communicative intent.

Prior to joining UB, Doermann developed and led the DARPA MediFor program, which attempted to automatically determine, at internet scale, whether images, audio, and video had been manipulated. Current challenges require technology that advances beyond individual images and videos to address the semantics of multimedia campaigns. This analysis includes attributing the individuals or organizations responsible for the manipulated assets and characterizing whether the material was created for malicious purposes.

UB's expertise in text analysis, video analytics, and information fusion will lead to new tools to combat the widespread onslaught of disinformation campaigns.

Sub-award PI:

David Doermann, SUNY Empire Innovation Professor, Computer Science and Engineering, School of Engineering and Applied Sciences

Sub-award Co-PI:

Junsong Yuan, Associate Professor, Computer Science and Engineering, School of Engineering and Applied Sciences

Uncovering DeepFake videos

Siwei Lyu is part of the team developing the Semantic Information Defender (SID), a highly advanced, scalable, fully automated system that will ingest and assess news articles, social media posts, technical documents, and other media types across a range of cultures. SID will then prioritize media items containing disinformation with malicious intent, while providing human-level explanations, full provenance to sources, and subtle distinctions of intent that lead to the detection of disinformation campaigns.

The Semantic Information Defender team is funded by DARPA and led by Kitware Inc., with participating teams from UB, the University of Illinois at Urbana-Champaign, the University of Michigan, Columbia University, the University at Albany, and Eduworks Inc. Lyu leads the UB team, which will focus on the detection, attribution, and characterization of AI-synthesized media, commonly known as DeepFakes.

Sub-award PI:

Siwei Lyu, SUNY Empire Innovation Professor, Computer Science and Engineering, School of Engineering and Applied Sciences