A new institute, based at Brown and supported by a $20 million National Science Foundation grant, will convene researchers to guide development of a new generation of AI assistants for use in mental and behavioral health.
PROVIDENCE, R.I. [Brown University] - With a $20 million grant from the U.S. National Science Foundation, Brown University researchers will lead an artificial intelligence research institute aimed at developing a new generation of AI assistants capable of trustworthy, sensitive and context-aware interactions with people. Work to develop the advanced assistants is specifically motivated by the potential for use in mental and behavioral health, where trust and safety are of the utmost importance.
The AI Research Institute on Interaction for AI Assistants (ARIA) will combine research on human and machine cognition, with the goal of creating AI systems that are able to interpret a person's unique behavioral needs and provide helpful feedback in real time. To understand what form such systems should take and how they could be safely and responsibly deployed, the institute will bring together experts from across the nation spanning computer science and machine learning, cognitive and behavioral science, law, philosophy and education.
Creating AI systems that can operate safely in a sensitive area like mental health care will require capabilities that extend well beyond those of even today's most advanced chatbots and language models, according to Ellie Pavlick, an associate professor of computer science at Brown who will lead the ARIA collaboration.
"Any AI system that interacts with people, especially who may be in states of distress or other vulnerable situations, needs a strong understanding of the human it's interacting with, along with a deep causal understanding of the world and how the system's own behavior affects that world," Pavlick said. "At the same time, the system needs to be transparent about why it makes the recommendations that it does in order to build trust with the user. Mental health is a high stakes setting that embodies all the hardest problems facing AI today. That's why we're excited to tackle this and figure out what it takes to get these things absolutely right."