Are we witnessing the rise of a different, adaptive artificial intelligence (AI) that works with humans and supports them with smart decisions? Computer scientist Niao He is investigating how this kind of AI can be theoretically underpinned so that it really does provide benefits.
As a researcher, Niao He has both people and technology in mind. Her inaugural lecture was telling in this regard, when she outlined in a few words the ways in which self-learning computer software is already changing our everyday lives: “We’re now entering a new age of artificial intelligence where we’re constantly amazed at what AI can do and how well it can do it, and also how it hugely impacts our daily life.” In the lecture, the Professor of Computer Science at the ETH Zurich Institute for Machine Learning likened the current state of development of artificial intelligence to the dawn, when the breaking day promises great things and we sense that much work still awaits us.
This image of dawn is a fair depiction of the striking rise that research and development in the field of machine learning and artificial intelligence have experienced in recent years. New mathematical and algorithmic insights, huge increases in hardware performance, freely available AI software modules, and massive amounts of data with which to train artificial intelligence have expanded the potential applications of AI by leaps and bounds.
Today, computers are capable of machine learning using statistical and data-driven methods. They supplement human knowledge by automatically picking out patterns and regularities from huge data sets that are too complex and too voluminous for humans to handle. For example, this is how AI can discover new protein structures, and thus play a part in the development of new medications. Niao He wants to go one step further and develop AI that can do more than recognise patterns. In accordance with the values of the ETH AI Center, where she is a member of the core faculty, she envisages trustworthy and collaborative artificial intelligence that works not in competition with humans but with them and supports them with intelligent, transparent decisions.
AI as a companion and advisor
The long-term goal of her research is to develop adaptable AI that – like humans themselves – adjusts quickly and flexibly to changing environmental conditions and acts as an advisor to humans when they need to make unusually difficult decisions. In doing so, Niao He is also spurred on by her own life experience: after completing her Bachelor’s degree in China, she studied and conducted research in Georgia and Illinois (US) for a good ten years before coming to Switzerland in 2020. With each move, she adapted to a different culture and acquired new skills – in Illinois, for example, she learned to drive, and in Zurich she learned German. “Each time I encountered a completely new environment, I often wished there was AI that could help me make the best possible decisions.”
Niao He’s research focuses on optimisation, automation and intelligent decision-making: her group is investigating the principles for designing algorithms – the calculation rules on which intelligent software is based – that are mathematically sound, so that AI works reliably at all times and enables data-driven problem solving and intelligent decision-making. To ensure that AI actually complements rather than replaces the work of humans, Niao He’s team is looking for new AI approaches and alternative machine learning methods. “These days, it’s almost become routine for us to develop intelligent programs that sift through huge amounts of data to solve real-world problems of extremely high complexity,” she explains.
Learning to deal with uncertainty and the unknown
Ultimately, most of today’s AI methods improve the quality of their results by learning what works from large training data sets, thereby becoming more reliable. “In day-to-day operations, though, the problems AI is designed to solve are subject to many uncertainties,” Niao He points out. These uncertainties can be of a technical or human nature. They can concern data and data security, the use of shared platforms, or human bias.
“For artificial intelligence to work reliably in the face of uncertainty and changing conditions, it’s important that we formulate the uncertainties mathematically and integrate them into our learning algorithms. That’s what we’re working on,” Niao He says, adding: “We need AI systems that can make consistent decisions over time, that can learn to deal with uncertainty or unknown environments, and that can adapt to new tasks.” A promising approach that could lead to adaptive AI is reinforcement learning, in which an intelligent learning procedure increases the reliability of its results through repeated interaction with the environment. Niao He’s team is also extending this approach to settings where data is sparse or human experience is lacking altogether.
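Learning from repeated interaction rather than from a fixed data set can be sketched in a few lines. The following is a minimal, purely illustrative Python example of tabular Q-learning on a toy “chain” world; the environment, names and parameters are invented for illustration and are not taken from Niao He’s research:

```python
import random

# Illustrative sketch only: tabular Q-learning on a toy 5-state "chain"
# environment. All names and hyperparameters here are hypothetical.

N_STATES = 5          # states 0..4; state 4 is the rewarding goal
ACTIONS = [0, 1]      # 0 = move left, 1 = move right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def step(state, action):
    """Deterministic chain dynamics: wall on the left, goal on the right."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(q, state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

def train(episodes=500, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            action = random.choice(ACTIONS) if random.random() < EPS else greedy(q, state)
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward reward + discounted max
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned policy should prefer moving right in every non-terminal state.
policy = [greedy(q, s) for s in range(4)]
```

The epsilon-greedy rule captures the trade-off at the heart of this approach: the learner must occasionally try actions it currently believes are worse, because only interaction with the environment reveals whether its estimates are wrong.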
Reliability over time creates trust
For Niao He, one thing is clear: “AI should essentially be ‘human-centric’ – that is, it needs to focus on people, express our values and work in a trustworthy way.” She shares the view that values such as trustworthiness, transparency, privacy, fairness, ethics and accountability should be the guiding principles for AI deployment in practice. “Trusted AI is AI that works reliably over a long period of time. Reliability over time creates trust,” Niao He says.
Theory is close to Niao He’s heart. She is aware that talk of trustworthiness is ambiguous: “Trustworthy AI is a very magical word to me because it has already been overloaded with many meanings. For me, trustworthiness is a question of methodology.” Decisively, she adds: “From the bottom up, we should build AIs that are mathematically sound and theoretically guaranteed before they are deployed in practice. I think what’s unfortunately missing in this picture of trustworthy AI is the theoretical aspects of AI.”
Knowing boundaries, increasing diversity
It is comparatively easy these days for computer designers and programmers to try things out on AI platforms. As a result, some algorithms in practical use produce very plausible results, yet when an error comes to light, it remains unclear why it occurred. For Niao He, “theory” means understanding AI’s mathematical and algorithmic foundations well enough to explain at any time how an algorithm really works and how it obtains its results. “The point of theoretical understanding is really trying to understand the fundamental limits of these problems and fundamental limits of the algorithms.”
Developing AI in a human-centric way requires counterfactual thinking, Niao He says. What would have happened if I had chosen differently, or if someone of another gender or ethnic group were to choose in my place? Diversity is important for AI research, she says: “Especially when it comes to fairness, trust, and collaboration with AI, it really is essential that different people bring their perspectives to the development of AI. That’s why I’d like to especially encourage female students to participate in AI research to ensure that, in future, AI is collaborative, ethical, fair and trustworthy.”