Google working on 'kill switch' to prevent Terminator-style robot apocalypse

Fears about the power of maniacal machines have been growing in recent years. Now Google is trying to stop them

Google is working on a 'kill switch' to stop intelligent robots turning on their human masters.

Fears about the power of maniacal machines have been growing in recent years, with both tech pioneer Elon Musk and Professor Stephen Hawking warning of the dreadful possibility of a Terminator-style war between humanity and our super-smart silicon creations.

Now the search engine giant has published a paper outlining the work its British artificial intelligence (AI) team DeepMind is doing to ensure humanity is not swept away by a metallic fist.

DeepMind develops algorithms that allow machines to learn for themselves directly from raw experience or data.

The team is now developing a way to stop an AI from learning how to prevent humans from interrupting one of its activities - firing a nuke, for example - a property called 'safe interruptibility'.

Google is working on 'safe interruptibility'. File picture of Google's headquarters campus in Mountain View (Image: Rex)

In the 'Terminator' films, humanity has been overrun by an army of powerful androids (Image: Rex)

"Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not necessarily receive rewards for this," the researchers write.

Last month a pair of academics sketched out a terrifying vision of the future in which "super intelligent" machines declare war on humanity and then set about wiping us off the planet.


Roman V. Yampolskiy of the University of Louisville and the independent researcher Federico Pistono said computers could commit "specicide" by obliterating humanity.

This could be done by blowing up atomic power plants, nuking us with our most powerful weapons, taking control of all the world's military drones or performing some other dastardly act of destruction.

Asimov's laws of robotics

In 1942, science fiction writer Isaac Asimov wrote The Three Laws of Robotics, a set of rules to prevent robots from harming humanity.

They are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


      (Source: Mirror)