Artificial intelligence can help combat the spread of misinformation

Release Date: May 14, 2020

“The human factor is the most difficult, and it’s very hard to identify people’s true intent. Some people just are spreading information for fun, but some people are trying to hurt others.”
Jun Zhuang, professor of industrial and systems engineering, University at Buffalo

BUFFALO, N.Y. – When disasters strike, the spread of misinformation online can be especially damaging.

Jun Zhuang, a professor of industrial and systems engineering at the University at Buffalo School of Engineering and Applied Sciences, is creating tools that employ artificial intelligence to help the Federal Emergency Management Agency and other organizations tamp down dangerous rumors and false reporting in times of crisis.

For example, his research lab created a machine learning algorithm that monitors rumors using data extracted from Twitter.

The lab uses data and misinformation from past disasters to teach the algorithm how to evaluate the veracity of a tweet. The algorithm evaluates the tweet's content as well as user characteristics, such as whether the account is verified, whether it belongs to a news agency or a well-known public figure, and how many times the tweet has been retweeted, and then generates an output indicating whether the user is promoting false information, says Zhuang, who also works with the Department of Homeland Security on counterterrorism and disaster management issues.
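The article does not specify which model the lab uses, but a minimal sketch of this kind of feature-based approach might look like the following, where the feature set, the logistic regression model, and the example labels are all illustrative assumptions rather than details from Zhuang's research.

```python
# A minimal sketch of a feature-based rumor classifier of the kind described above.
# The features, model choice, and labels are assumptions for illustration only.
from sklearn.linear_model import LogisticRegression

# Hypothetical training examples: each row encodes one tweet's user/engagement features.
# Columns: [account_verified, is_news_agency, is_public_figure, retweet_count]
X_train = [
    [1, 1, 0, 500],   # verified news agency, widely retweeted
    [0, 0, 0, 3],     # unverified personal account, barely shared
    [1, 0, 1, 1200],  # verified public figure
    [0, 0, 0, 950],   # unverified account with unusually wide spread
]
# Labels drawn from past disasters: 1 = later confirmed misinformation, 0 = not.
y_train = [0, 0, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# Score a new tweet: the output is the estimated probability it is misinformation.
new_tweet_features = [[0, 0, 0, 800]]
print(model.predict_proba(new_tweet_features)[0][1])
```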

Help for agencies with limited resources

During a disaster, agencies that manage relief efforts must apportion limited human and technological resources judiciously. They have to make decisions about what is a priority and what is not.

The algorithm helps with this by alerting responders if a false rumor is being spread by accounts with large numbers of followers. Agencies can then decide if they want to try to debunk it.
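A triage step like this could be as simple as the sketch below, which filters flagged rumors by audience size; the threshold, probability cutoff, and record structure are hypothetical, not details reported in the article.

```python
# Illustrative triage: surface only likely-false rumors amplified by high-follower
# accounts, so agencies with limited resources can decide what to debunk first.
FOLLOWER_ALERT_THRESHOLD = 100_000

flagged_tweets = [
    {"text": "Rumor A", "follower_count": 250_000, "misinfo_probability": 0.91},
    {"text": "Rumor B", "follower_count": 1_200, "misinfo_probability": 0.88},
]

alerts = [
    t for t in flagged_tweets
    if t["misinfo_probability"] > 0.8 and t["follower_count"] >= FOLLOWER_ALERT_THRESHOLD
]
for tweet in alerts:
    print(f"ALERT: '{tweet['text']}' spread by account with "
          f"{tweet['follower_count']:,} followers")
```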

Zhuang’s research shows that about 90% of people simply spread information without checking it in any way.

“But on the other side, our research shows that when people find out something that they have spread is debunked, only 10% of them will make a correction to their original information,” he says.

Next frontier is beyond the capacity of AI

Zhuang wants to create a tool that agencies could use to track misinformation more broadly across social media platforms by collecting and analyzing massive amounts of data in real time.

“The idea is that agencies or end users would get a warning sign on any particular message indicating whether it was true or false—or a probability percentage that it could be false—based on the source, the path of information and the history of the user,” Zhuang says.

As Zhuang and his team continue teasing out the thicket of online information flows, it has become clear that the next frontier—understanding the motivations of those who spread misinformation—is beyond the current capacity of AI.

“The human factor is the most difficult, and it’s very hard to identify people’s true intent. Some people just are spreading information for fun, but some people are trying to hurt others,” Zhuang says. “We’re trying to understand why people do A versus B. There's lots of nuance behind that, and using AI tools to understand how people will make real decisions, well that’s the most challenging part.”

Media Contact Information

Cory Nealon
Director of Media Relations
Engineering, Computer Science
Tel: 716-645-4614
cmnealon@buffalo.edu