By Nicole Capozziello
Published July 12, 2021
Last November, a team from the University at Buffalo entered the highly competitive Alexa Prize Socialbot Grand Challenge.
While the multimillion-dollar competition is in its fourth year, this was the first time UB had participated in the challenge of creating a socialbot “that can converse coherently and engagingly for 20 minutes with humans on a range of current events and popular topics such as entertainment, sports, politics, technology and fashion.”
The members of Team Proto, Sougata Saha and Souvik Das, PhD students in the Department of Computer Science and Engineering, and Elizabeth Soper and Erin Pacquetet, PhD students in the Department of Linguistics, were happy to have the opportunity, not knowing how far they would go in a pool of esteemed competitors from around the globe.
Since then, the team, under the direction of Rohini Srihari, professor and associate chair of the Department of Computer Science and Engineering, has drawn on its members' complementary skills and knowledge through the successive months and rounds of the competition. Thus far, the team's socialbot has been judged by thousands of users, including more than 55,000 during the semifinal phase alone.
Now, nearly a year after their original application was accepted, the team has become one of only five to advance to the finals stage of the competition. They are competing against the University of California, Santa Cruz; Czestochowa University of Technology in Poland; Stanford University; and Emory University. Stanford and Emory are both previous winners of the competition.
The finals will be conducted over a three-day period later this month. During this time, the socialbots will be evaluated by a panel of experts. While this phase does not specifically stipulate any additional features, the team has been hard at work on some overall enhancements that they think will help distinguish their socialbot from the others. These include innovations in the socialbot's ability to converse on any topic, the "naturalness" of its conversation, and its reactions to various user behaviors the team has observed throughout testing.
The winning team, which will receive a prize of $500,000, will be announced by the end of July.
“Regardless of the final outcome, it has been a valuable experience, allowing us to learn about user behavior during conversations, as well as what it takes to effectively leverage neural text generators,” says Srihari. “I’m also grateful to have had a multidisciplinary team consisting of computer scientists and linguists and the complementary expertise they bring. This is a strength, and has compensated for us having a relatively small team compared to our competitors.”
We caught up with Srihari to hear all about where the team is now – and where they hope their work will lead them.
Our learning has been in two areas. First, we have learned a lot about how users react to conversational AI systems: what gets them excited and wanting to continue a conversation, what they find less interesting, and what spooks them out. Second, on the technology side, we have learned a lot about effectively leveraging the latest deep learning technology, in particular, neural text generation. We have also learned about the ebb and flow of a conversation and how to algorithmically control this, for example, when to let the user steer the conversation. When we first started, most of our socialbot utterances were generated through knowledge engineering approaches, based on deep domain knowledge of how people converse about topics such as movies or games. Only a small fraction of the Proto utterances were generated through the neural text generation models. Now the situation has flipped, with most of the utterances being generated through neural models. This allows us to converse on a far wider range of topics.
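The shift Srihari describes, from mostly knowledge-engineered replies to mostly neural generation, can be pictured as a routing decision. The sketch below is purely illustrative and not Team Proto's actual code: all names (`CURATED_REPLIES`, `detect_topic`, `respond`) are hypothetical, and a trivial keyword matcher and a stand-in generator take the place of real topic detection and a real neural model.

```python
# Hypothetical sketch of a hybrid socialbot response selector: prefer a
# curated, knowledge-engineered reply when the topic is recognized, and
# fall back to a neural text generator for everything else.
from typing import Callable, Dict

# Curated templates built from domain knowledge of how people discuss a topic.
CURATED_REPLIES: Dict[str, str] = {
    "movies": "Have you seen anything good lately? I'm always hunting for recommendations.",
    "games": "Board games or video games? I'm curious what you play.",
}


def detect_topic(user_utterance: str) -> str:
    """Very naive keyword-based topic detection (placeholder for a real classifier)."""
    text = user_utterance.lower()
    for topic in CURATED_REPLIES:
        if topic.rstrip("s") in text:  # crude singular/plural match
            return topic
    return "open_domain"


def respond(user_utterance: str, neural_generate: Callable[[str], str]) -> str:
    """Route to a curated reply when possible; otherwise defer to the neural model."""
    topic = detect_topic(user_utterance)
    if topic in CURATED_REPLIES:
        return CURATED_REPLIES[topic]
    return neural_generate(user_utterance)


# Stand-in for a real neural generator (e.g., a fine-tuned transformer).
def fake_neural_generate(prompt: str) -> str:
    return "That's interesting! Tell me more."


print(respond("I watched a great movie yesterday", fake_neural_generate))
print(respond("What do you think about space travel?", fake_neural_generate))
```

Flipping the balance, as the team did, amounts to shrinking the curated table and letting the generator handle an ever wider share of topics, while the router still decides when to let the user steer the conversation.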
Conversational AI systems are in a nascent stage. Currently we are focused on just the ability to conduct human-like conversations. However, these socialbots have tremendous potential for societal impact. We are particularly interested in conducting purposeful conversations, for example, encouraging people to pursue health-related goals, and of course, in helping to combat disinformation.
Perhaps the fear that AI systems can think and learn completely on their own and thus become harmful to individuals and society. AI systems perform based on (i) the models that scientists design to implement the solutions, and perhaps even more important, (ii) the data that the systems are trained on. Currently, socialbots are limited in their abilities due to the lack of representative training data sets. It’s hard for socialbots to narrate anecdotes about personal experiences, or express emotions. We have a long way to go.
Ideally, we would have liked the UB community to help us test out the socialbot. However, due to the logistics involved, this is difficult, since it requires physical proximity to a device used by our team members. Still, all the encouragement from our colleagues and the UB community at large has generated a lot of positive energy for our team!