Artificial intelligence, or AI, has become a household word. Once reserved for research and national security circles, AI now permeates our everyday existence.
In recent forums, multidisciplinary experts from Pacific Northwest National Laboratory (PNNL) joined other international leaders to discuss opportunities, as well as challenges, for the future of neural information processing, the foundation for AI.
Correlation versus causation
The words people use are predictors of their behaviors and actions. Through translation capabilities and AI interfaces, natural language processing (NLP) models can learn to recognize speech patterns and deliver automated, targeted communications to smart devices such as your phone, TV, or social media channels.
Svitlana Volkova led a contingent of PNNL data scientists sharing their latest research at the third annual West Coast Natural Language Processing Summit in October 2020. The one-day virtual event convened experts across government, academia, and industry to discuss the latest advances in NLP technologies and foster new collaborations.
Volkova, who specializes in social media analytics and computational linguistics, joined a panel of experts, including the head of Amazon Alexa, researchers from Facebook AI and Apple, and a professor from Carnegie Mellon University, to discuss the state of NLP research both during and after the COVID-19 outbreak.
While each panelist approached the topic from different perspectives, “we all converged on the same page,” said Volkova. “We are frustrated with how NLP and AI failed to respond to the COVID-19 crisis.”
“The models have no reasoning, no understanding,” said Volkova. “And they are not easily scalable and adaptive to support real-world applications. They don’t really work in the wild.”
NLP models traditionally rely on pattern matching. For example, Volkova explained, models know specific terms are associated with propaganda. But despite months or years of development and massive amounts of training data, “some models still can’t reason why something is propaganda in this context but not that context,” said Volkova.
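The limitation Volkova describes can be illustrated with a deliberately simplified sketch. The watchlist terms, sentences, and function name below are hypothetical, and real NLP models are statistical rather than rule-based, but they share the same weakness: they match surface patterns without reasoning about context.

```python
# Hypothetical watchlist of terms a model has learned to associate
# with propaganda (illustrative only).
PROPAGANDA_TERMS = {"enemy", "traitor", "glorious"}

def flag_propaganda(text: str) -> bool:
    """Flag text if it contains any watchlist term, ignoring context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & PROPAGANDA_TERMS)

# Both sentences trigger the same flag, even though only the first uses
# the term in a propagandistic way; the matcher cannot tell the difference.
print(flag_propaganda("They call every critic a traitor."))                     # True
print(flag_propaganda("This study examines why the word traitor spreads."))     # True
```

Pattern matching of this kind finds the correlation (the word appears) but not the causation (why the passage is, or is not, propaganda), which is the gap the NLP community is now working to close.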
Similarly, existing models used to extract information about disease symptoms, progression, and treatments from research publications were not able to rapidly adapt to help address the COVID crisis.
Moving beyond correlation to causation will help address these limitations. Volkova said the NLP community is refocusing efforts on how to include both ethics and reasoning into the models.
Aside from this challenge, Volkova said the NLP community not only adapted but was very productive in the face of the pandemic. She also noted more diverse and inclusive meetings than in past years, with creative new presentation options. Case in point, during the Summit’s new video poster session, Volkova’s colleagues Kayla Duskin, Emily Saldanha, Maria Glenski, and Ellyn Ayton presented their latest linguistics modeling research related to phishing and deception.
More than words
Expanding on NLP, another cohort of PNNL researchers virtually joined the 34th Conference on Neural Information Processing Systems (NeurIPS) in December. NeurIPS is the world’s top forum for the exchange of information on AI advances for biology, technology, mathematics, and physics.
“NeurIPS is extraordinarily competitive,” said Court Corley, who leads PNNL’s Data Sciences and Analytics Group. “Because PNNL is a multidisciplinary laboratory, we have built-in advantages for tackling complex scientific challenges at scale with AI, which is as important as computing for most areas of science.”
During NeurIPS, nearly three dozen PNNL researchers on nine different teams shared their results in four key areas of AI research, spanning nanoscale to exascale applications for national security, energy, and the environment:
- AI from limited data, called “Few-shot Learning,” led by Aaron Tuor
- Insights from open data, called “Content Intelligence” and causal reasoning, led by Volkova
- Physical science and theory in AI models, or “Domain Aware AI,” led by Jenna Pope
- Safety, security, and interpretability, or “Assurance,” with multiple research teams pursuing this line of inquiry
The week-long NeurIPS conference was virtually co-located with the 15th Women in Machine Learning (WiML) workshop. This annual one-day workshop gives female faculty, research scientists, and graduate students in the machine learning community an opportunity to meet, network, and exchange ideas; participate in career-focused panel discussions with senior women in industry and academia; and learn from each other. Volkova, Duskin, Ayton, and Glenski participated in the high-demand workshop.