What's the secret to making sure AI doesn't steal your job?

Whether it's athletes on a sporting field or celebrities in the jungle, nothing holds our attention like the drama of vying for a single prize. And when it comes to the evolution of artificial intelligence (AI), some of the most captivating moments have also been delivered in nail-biting finishes.

Authors

  • Cecile Paris

    Chief Research Scientist, Knowledge Discovery & Management, CSIRO

  • Andrew Reeson

    Economist, Data61, CSIRO

In 1997, IBM's Deep Blue chess computer was pitted against grandmaster and reigning world champion Garry Kasparov, having lost to him the previous year.

But this time, the AI won. The popular Chinese game Go was next, in 2016, and again there was a collective intake of breath when Google's AlphaGo defeated Lee Sedol, one of the world's best players. These competitions elegantly illustrate what is unique about AI: we can program it to do things we can't do ourselves, such as beat a world champion.

But what if this framing obscures something vital - that human and artificial intelligence are not the same? AI can quickly process vast amounts of data and be trained to execute specific tasks; human intelligence is significantly more creative and adaptive.

The most interesting question is not who will win, but what can people and AI achieve together? Combining both forms of intelligence can provide a better outcome than either can achieve alone.

This is called collaborative intelligence. And this is the premise of CSIRO's new A$12 million Collaborative Intelligence (CINTEL) Future Science Platform, which we are leading.

Checkmate mates

While chess has been used to illustrate AI-human competition, it also provides an example of collaborative intelligence. IBM's Deep Blue beat the world champion, but it did not render human players obsolete. Human chess players collaborating with AI have proven superior to both the best standalone AI systems and the best unassisted human players.

And while such "freestyle" chess requires both excellent human skill and AI technology, the best results don't come from simply combining the best AI with the best grandmaster. The process through which they collaborate is crucial.

So for many problems - particularly those that involve complex, variable and hard-to-define contexts - we're likely to get better results if we design AI systems explicitly to work with human partners, and give humans the skills to interpret AI systems.

A simple example of how machines and people are already working together is found in the safety features of modern cars. Lane keep assist technology uses cameras to monitor lane markings and will adjust the steering if the car appears to be drifting out of its lane.

However, if it senses the driver is actively steering away, it will desist so the human remains in charge (and the AI continues to assist in the new lane). This combines the strengths of a computer, such as limitless concentration, with those of the human, such as knowing how to respond to unpredictable events.
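To make this hand-over concrete, here is a minimal sketch of the kind of rule such a system might apply. It is written in Python with invented function names, sensor values and thresholds; it illustrates the idea only, not how any real lane keep assist controller is implemented.

```python
# Minimal sketch of a lane keep assist hand-over rule.
# All names, thresholds and sensor values are illustrative assumptions,
# not taken from any real vehicle system.

def steering_correction(lane_offset_m: float,
                        driver_torque_nm: float,
                        driver_torque_threshold_nm: float = 2.0,
                        max_offset_m: float = 0.3) -> float:
    """Return a small corrective steering value, or 0.0 to yield to the driver.

    lane_offset_m: how far the car has drifted from the lane centre (metres).
    driver_torque_nm: torque the driver is applying to the wheel (newton-metres).
    """
    # If the driver is actively steering (e.g. deliberately changing lanes),
    # the assistant desists so the human remains in charge.
    if abs(driver_torque_nm) > driver_torque_threshold_nm:
        return 0.0

    # Otherwise, nudge the car back towards the lane centre when it drifts.
    if abs(lane_offset_m) > max_offset_m:
        return -0.1 * lane_offset_m  # small proportional correction

    return 0.0


# Drifting right with hands resting lightly on the wheel: assistant corrects.
print(steering_correction(lane_offset_m=0.5, driver_torque_nm=0.4))
# Driver deliberately steering into the next lane: assistant yields (0.0).
print(steering_correction(lane_offset_m=0.5, driver_torque_nm=3.5))
```

The point is the division of labour rather than the arithmetic: the machine supplies sustained attention, and control passes back to the person as soon as they signal clear intent.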

There is potential to apply similar approaches to a range of other challenging problems. In cybersecurity settings, humans and computers could work together to identify which of the many threats from cybercriminals are the most urgent.

Similarly, in biodiversity science, collaborative intelligence can be used to make sense of massive numbers of specimens housed in biological collections.

Laying the foundations

We know enough about collaborative intelligence to say it has massive potential, but it's a new field of research - and there are more questions than answers.

Through CSIRO's CINTEL program we will explore how people and machines work and learn together, and how this way of collaborating can improve human work. Specifically, we will address four foundations of collaborative intelligence:

  1. collaborative workflows and processes. Collaborative intelligence requires rethinking workflows and processes to ensure humans and machines complement each other. We'll also explore how it might help people develop new skills that are transferable across areas of the workforce

  2. situation awareness and understanding intent. Ensuring humans and machines work towards the same goals, and that humans understand the current progress of a task

  3. trust. Collaborative intelligence systems will not work without people trusting the machines. We must understand what trust means in different contexts, and how to establish and maintain trust

  4. communication. The better the communication between humans and the machine, the better the collaboration. How do we ensure both understand each other?

Robots reimagined

One of our projects will involve working with the CSIRO-based robotics and autonomous systems team to develop richer human-robot collaboration. Collaborative intelligence will enable humans and robots to respond to changes in real time and make decisions together.

For example, robots are often used to explore environments that might be dangerous for humans, such as in rescue missions. In June 2021, robots were sent to help in search and rescue operations after a 12-storey condominium building collapsed in Surfside, Florida.

Often, these missions are ill-defined, and humans must use their own knowledge and skills (such as reasoning, intuition, adaptation and experience) to identify what the robots should be doing. While developing a true human-robot team may initially be difficult, it's likely to be more effective in the long term for complex missions.


Cecile Paris receives funding from various departments of the Australian Government. She is an Honorary Professor at Macquarie University.

Andrew Reeson has received funding from various departments of the Australian Government and is involved in research collaborations with nbn co and TAFE Queensland.

Courtesy of The Conversation. This material from the originating organization/author(s) may be point-in-time in nature and has been edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions and conclusions expressed herein are solely those of the author(s).