World’s Leading Scientists and Innovators Warn: Stop Killer Robots!

July 28th, 2015 - by admin

Samuel Gibbs / The Guardian & Editorial / The Guardian – 2015-07-28 23:22:06

http://www.theguardian.com/technology/2015/jul/27/musk-wozniak-hawking-ban-ai-autonomous-weapons

Musk, Wozniak and Hawking Urge Ban
On Warfare AI and Autonomous Weapons

Samuel Gibbs / The Guardian

(July 27, 2015) — Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”.

The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking along with 1,000 AI and robotics researchers.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold for going to battle and result in greater loss of human life.

Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no scarce, hard-to-obtain materials, and its development will be difficult to monitor.

“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” said the authors.

Toby Walsh, professor of AI at the University of New South Wales, said: “We need to make a decision today that will shape our future and determine whether we follow a path of good. We support the call by a number of different humanitarian organisations for a UN ban on offensive autonomous weapons, similar to the recent ban on blinding lasers.”

Musk and Hawking have warned that AI is “our biggest existential threat” and that the development of full AI could “spell the end of the human race”. But others, including Wozniak, have recently changed their minds on AI, with the Apple co-founder saying that robots would be good for humans, making them like the “family pet and taken care of all the time”.

At a UN conference in Geneva in April discussing the future of weaponry, including so-called “killer robots”, the UK opposed a ban on the development of autonomous weapons, despite calls from various pressure groups, including the Campaign to Stop Killer Robots.


The Human Factor:
The Guardian View on Robots as Weapons

Editorial / The Guardian

LONDON (April 13, 2015) — The future is already here, said William Gibson. It’s just not evenly distributed. One area where this is obviously true is the field of lethal autonomous weapon systems, as they are known to specialists — killer robots to the rest of us.

Such machines could roam a battlefield, on the ground or in the air, picking their own targets and then shredding them with cannon fire, or blowing them up with missiles, without any human intervention. And if they were not deployed on a battlefield, they could turn wherever they were in fact deployed into a battlefield, or a place of slaughter.

A conference in Geneva, under the auspices of the UN, is meeting this week to consider ways in which these machines can be brought under legal and ethical control. Optimists reckon that the technology is 20 to 30 years away from completion, but campaigners want it banned well before it is ready for deployment. The obvious question is whether it is not already too late.

A report by Human Rights Watch in 2012 listed a frightening number of almost autonomous and wholly lethal weapons systems deployed around the world. These range from a German automated system for defending bases in Afghanistan, which detects and fires back at incoming ordnance, to a robot deployed by South Korea in the demilitarised zone, which uses sensing equipment to detect humans as far as two miles away as it patrols the frontier, and can then kill them from a very safe distance.

All those systems rely on a human approving the computer’s actions, but at a speed which excludes the possibility of consideration: often there is as little as half a second in which to press or not to press the lethal button.

Half a second is — just — inside the norm of reaction times, but military aircraft are routinely built to be so manoeuvrable that the human nervous system cannot react quickly enough to make the constant corrections necessary to keep them in the air. If the computers go down, so does the plane. The killer cyborg future is already present in such machines.

In some ways, this is an ethical advantage. Machines cannot feel hate, and they cannot lie about the causes of their actions. A programmer might in theory reconstruct the precise sequence of inputs and processes that led a drone to act wrongly and then correct the program. A human war criminal will lie to himself as well as to his interrogators. Humans cannot be programmed out of evil.

Although the slope to killer robots is a slippery one, there is one point we have not reached. No one has yet built weapons systems sufficiently complex that they make their own decisions about when they should be deployed. This may never happen, but it would be unwise to bet that way.

In the financial markets we already see the use of autonomous computer programs whose speed and power can overwhelm a whole economy in minutes. The markets, in that sense, are already amoral. Robots may be autonomous, but they cannot be morally responsible as humans must be. The ambition to control them is as profoundly human as it is right.

Posted in accordance with Title 17, Section 107, US Code, for noncommercial, educational purposes.