Team makes finals in Amazon’s Alexa Prize for artificial intelligence

A team of six Emory computer science students made it to the final round of Amazon’s Alexa Prize Socialbot Grand Challenge, a global competition among universities to create a chatbot that advances the field of artificial intelligence. The winner of the 2021 Alexa Prize will be announced in mid-August. At stake is a $500,000 first prize. In addition, $1 million in research funds will be awarded to the winning team if it meets the “grand challenge” criteria, including the ability of its chatbot to engage the judges in conversation for at least 20 minutes.

In addition to Emory, the finalists are Czech Technical University, Prague; SUNY at Buffalo, New York; Stanford University; and the University of California, Santa Cruz.

The Emory team is headed by graduate students Sarah Finch and James Finch, along with faculty advisor Jinho Choi, assistant professor in the Department of Computer Science. Last year, the trio headed a team of 14 Emory students that took first place, winning $500,000 for their chatbot named Emora. They chose the name because it sounds like a feminine version of “Emory” and is similar to a Hebrew word for an eloquent sage.

This year, they are turning up the heat with an even more advanced version of Emora and new team members, including graduate student Han He and undergraduates Sophy Huang, Daniil Huryn and Mack Hutsell. All the students are members of Choi’s Natural Language Processing Research Laboratory.

“I’m extremely proud to have such a talented team of students,” Choi says. “It’s a group of strongly motivated people with the right combination of diverse skills coming together at the right time. They’re working on changing the paradigm for conversational artificial intelligence.”

“We’re using some established technology but taking a groundbreaking approach in how we combine and execute dialogue management so a computer can make logical inferences while conversing with a human,” adds Sarah Finch. “Ultimately, we’re making Emora even more flexible in how she can interact with people.”

The annual Alexa Prize, launched in 2016, challenges university students to make breakthroughs in the design of chatbots, also known as socialbots — software applications that let humans and computers interact by conversing in natural language.

In the runup to the finals, users of Amazon’s Alexa voice assistant volunteered to test out the competing chatbots, which were not identified by their names or universities. A chatbot’s success was gauged by user ratings. Teams used this feedback to keep improving their chatbots.

“The competition drove us to build something that works, based on immediate feedback from actual users,” says James Finch. “That’s forcing us to continuously focus on the right problems and find solutions.”

Sarah and James Finch, who married in 2019, are the ultimate computer power couple. They met at age 10 in a math class in their hometown of Grand Blanc, Michigan. They were dating by high school, bonding over a shared love of computer programming. As undergraduates at the University of Michigan, they pursued a shared passion for programming robots to speak more naturally with humans.

They chose to come to Emory for graduate school because of Choi, an expert in natural language processing, and Eugene Agichtein, professor in the Department of Computer Science and an expert in information retrieval.

At the start of the Alexa Prize competition, in September, each team received a $250,000 research grant, Alexa-enabled devices and other tools, data and support from Amazon. Emory’s 2021 team had a head start with the winning Emora chatbot, created in 2020 by a team of 14 students from the labs of Choi and Agichtein. They deployed the existing Emora, which had already proven popular with users, to buy time to develop a novel framework from scratch.

Emora was designed not just to answer questions, but to serve as a “social companion.” During the height of the COVID-19 pandemic, the chatbot provided comfort and warmth to people interacting with Amazon’s Alexa-enabled devices, whether they wanted to discuss movies, sports, and their pets, or their concerns for themselves and their families.

The strategy paid off when the 2020 Emory team scored an average user rating of 3.81, beating second-place finisher Stanford University (with a rating of 3.17) to take the first-place Alexa Prize of $500,000. Ultimately, however, none of the competing teams last year earned a composite score of 4.0 from the final judges to win the grand challenge prize of $1 million. A key hurdle to the grand challenge is the requirement that the chatbot engage the judges for at least 20 minutes in most conversations.

Such a lengthy, logical and free-ranging conversation between a computer and a human is a key remaining challenge in the field of artificial intelligence.

Computers learn how to respond to questions or comments from a human by being fed massive amounts of data on possible responses. Nuances, however, are often lost on machines. A logical response to someone telling you they bought a carton of milk, for example, would differ wildly from a logical response to someone telling you they just bought a house.

“Changing a few words in a sentence can give it a vastly different meaning, requiring a completely different reaction from the listener,” Choi says. “The human brain is wonderful because it can explore all the possible nuances of meaning in an instant and come up with an appropriate response. It’s a huge problem to design an algorithm that can do these kinds of subtle, social-linguistic calculations within a couple of seconds.”

While the original Emora was state-of-the-art in 2020, it was still too limited to meet the grand challenge. It was built on a platform the team calls the Emora State Transition Dialogue Manager, a behavioral mathematical model similar to a flowchart and equipped with several natural language processing models. Depending on what people say to the chatbot, the system chooses which conversational path to follow, selecting among possible transitions probabilistically and repeating that choice at each turn. While the system is good at chitchat, the longer a conversation goes on, the higher the chances that the system will miss a social-linguistic nuance and the conversation will go off the rails, diverting from the logical thread.
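The flowchart-style design described above can be illustrated with a minimal sketch. Everything here — the state names, trigger keywords, responses and weights — is invented for illustration and is not Emora’s actual data or code; the sketch only shows the general pattern of a state-transition dialogue manager that picks its next path probabilistically.

```python
import random

# Hypothetical transition table: for each state, a list of
# (trigger keyword, next state, response, selection weight).
TRANSITIONS = {
    "greet": [
        ("movie", "movies", "What was the last movie you saw?", 0.6),
        ("pet", "pets", "Tell me about your pet!", 0.4),
    ],
    "movies": [
        ("liked", "movies", "What did you like most about it?", 1.0),
    ],
    "pets": [
        ("dog", "pets", "Dogs are wonderful. What's its name?", 1.0),
    ],
}

def respond(state, user_utterance):
    """Pick the next state and reply by keyword match, weighting ties."""
    matches = [t for t in TRANSITIONS.get(state, [])
               if t[0] in user_utterance.lower()]
    if not matches:
        # No transition matched: fall back rather than derail the thread.
        # The longer the dialogue, the more often this branch is hit --
        # the weakness the article describes.
        return state, "Interesting! Tell me more."
    _, next_state, reply, _ = random.choices(
        matches, weights=[m[3] for m in matches])[0:1][0]
    return next_state, reply

state, reply = respond("greet", "I watched a great movie yesterday")
```

A real system would replace the keyword match with natural language processing models scoring each transition, but the control flow — state in, probabilistic transition out — is the same shape.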

The 2021 Emory team made the bold decision to develop a completely new system for Emora, leveraging team members’ areas of expertise. They based this year’s Emora on three frameworks: advanced core natural language processing technology, computational predicate-logic structures, and probabilistic reasoning for dialogue management.
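To give a feel for the predicate-logic idea mentioned above, here is a toy sketch: an utterance becomes a logical fact, and simple inference rules derive new facts the system could then talk about. The predicates, rules and the single-variable matching scheme are all invented for illustration, not taken from Emora.

```python
# Hypothetical rules: each maps a premise pattern to a conclusion,
# with "?x" as a variable to be bound during matching.
RULES = [
    (("bought", "?x", "house"), ("moving", "?x")),
    (("moving", "?x"), ("topic", "?x", "new_neighborhood")),
]

def infer(facts):
    """Apply the rules repeatedly until no new facts appear (a fixed point)."""
    changed = True
    while changed:
        changed = False
        for premise, conclusion in RULES:
            for fact in list(facts):
                if len(fact) == len(premise) and all(
                        p == "?x" or p == f for p, f in zip(premise, fact)):
                    binding = fact[premise.index("?x")]
                    new = tuple(binding if t == "?x" else t for t in conclusion)
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts

# "I bought a house" -> fact; inference yields follow-up topics.
facts = infer({("bought", "user", "house")})
```

This is where the milk-versus-house distinction from earlier becomes computable: different predicates trigger different inference chains, giving the dialogue manager logically grounded follow-ups rather than keyword guesses.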

“To our knowledge, our approach has never been done in the way we are attempting it,” Sarah Finch says.

In a race against time, Sarah and James Finch worked on the engineering of the new Emora system, as well as designing logic structures and implementing related algorithms. Undergraduates Huang, Huryn and Hutsell focused on developing dialogue content and conversational scripts for integration within the chatbot. Graduate student He focused on computational parsing, drawing on recent advances in the technology, a critical piece for translating unstructured natural language into computational structures from which logical outputs can be inferred.

As the competition nears the final heat, the team members continue to work non-stop, tweaking and improving the system.

“Everyone on the team is extremely dedicated,” Choi says. “We believe that Emora represents a groundbreaking moment for conversational artificial intelligence.”

/Public Release. This material comes from the originating organization/author(s) and may be of a point-in-time nature, edited for clarity, style and length. The views and opinions expressed are those of the author(s).