Xiong recognized by IEEE with Best Paper Award

Jinjun Xiong (second from left) stands on stage accepting the TRPMS Best Paper Award.

By Peter Murphy

Published February 3, 2025

A paper authored by Jinjun Xiong, SUNY Empire Innovation Professor in Computer Science and Engineering, earned the 2024 Best Paper Award from the Institute of Electrical and Electronics Engineers (IEEE) Transactions on Radiation and Plasma Medical Sciences (TRPMS) journal.

The TRPMS Best Paper Award is given annually to authors of the paper deemed best among those published in the journal. All papers published in the journal three years prior to the award year are eligible for consideration.

Published in 2021, Xiong’s paper, “On Interpretability of Artificial Neural Networks,” explains the importance of studying the foundation of many modern AI systems: artificial neural networks, also called deep neural networks (DNNs).

Core concepts and principles of DNNs form the foundation of modern AI development. DNNs underpin large language models, like ChatGPT, and drive the decisions those systems make. According to Xiong, modern DNNs have already had a significant impact on many industries, including autonomous vehicles, education, health care and medicine. The paper reviews the need to understand DNNs and how they make decisions, particularly in the high-stakes field of medicine.

“In the early stages of DNNs’ development, we realized their potential impacts on mission-critical applications, such as medical diagnosis and therapy,” says Xiong, who is also director of the University at Buffalo Institute for Artificial Intelligence and Data Science. “Therefore, we decided to do a systematic survey on this topic for the medical imaging professionals, hence our choice to publish the article with the premier IEEE Transactions on Radiation and Plasma Medical Sciences.”

Jinjun Xiong.

Xiong and co-authors Feng-Lei Fan, research assistant professor at the Chinese University of Hong Kong; Mengzhou Li, PhD student at Rensselaer Polytechnic Institute (RPI); and Ge Wang, Clark and Crossan Endowed Chair Professor and director of the Biomedical Imaging Center at RPI, proposed a simple and comprehensive system that categorizes different concepts, ideas and research approaches, helps people understand DNNs, and makes the many papers and ongoing research on the subject easier to navigate. By reviewing the state-of-the-art research of the time, the authors offered medical imaging professionals clarity on DNNs. Xiong says that part of the motivation for the study was to inspire other researchers to take a more open-minded view of DNNs or to focus some of their own research on the topic.

Much has changed in the world of DNNs since the paper was published.

“The capabilities of DNNs for natural language processing have since made significant progress, especially after the release of ChatGPT 3.5 in 2022,” Xiong says. “The techniques for improving our understanding of DNNs, especially in the context of natural language processing and transformer-based DNNs, have made tremendous progress. Maybe it’s time to write another systematic review on this subject.”

When the researchers first conducted the review and wrote the paper, they focused on the interpretability of DNNs, which Xiong says encompasses the various factors that help humans trust the models. Different groups of people use the models differently. Researchers and scientists aim to deepen their understanding of any complex system and to improve it. According to Xiong, the goal of most technical or scientific work on DNN models is to understand why a model works and where it will fail. Trustworthiness for people who do not have a scientific background is something else entirely.

“I define ‘interpretability’ as a process, a process that depends not only on technologies, but also people. They are interdependent. Like any complex system when it was first introduced, people will feel scared and reject its use initially,” Xiong says. “As people understand the complex system more, their acceptance will start to grow—sometimes this understanding has little to do with technologies, but people’s familiarity with the technologies—the more you use them, the more confident you are about them.”