The Rise of Robotic Killing Machines

April 26th, 2015 - by admin

Brooks Mencher / Insight: San Francisco Chronicle – 2015-04-26 23:51:48

http://www.sfchronicle.com/opinion/article/Rise-of-robotic-killing-machines-has-a-cautious-6219854.php

Rise of Robotic Killing Machines Has a Cautious World Talking
Brooks Mencher / Insight: San Francisco Chronicle

SAN FRANCISCO (April 23, 2015) — They’re called lethal autonomous weapons, or LAWs, and their military mission would be to seek out, identify and kill a human target independent of human control. Human decision would not be in the loop, and the only button a military commander would push would be the “on” button. In military terms, it’s called “fire and forget.”

The United Nations panel of experts would not have assembled for a second time in as many years if the battlefield use of thinking, man-killing robots were not at hand. The panel would not have called for a moratorium last year on what does not yet exist if artificially intelligent war weapons did not loom above us like a thunderhead.

Representatives of 60 nations, groups including Amnesty International, the International Committee of the Red Cross and Human Rights Watch, and scholars from around the world met in Geneva during the third week of April in an attempt to define the level of artificial intelligence needed for an international definition of robotic autonomy.

The conversation soon shifted to defining “meaningful human control,” but neither topic was resolved. The Panel of Experts, under the Convention on Conventional Weapons (CCW), will meet again next year to continue the discussion.

None of the industrial nations admits having a LAW, but there’s really no way to confirm the nonexistence of a weapon that would be classified as secret. If they don’t exist, they are barely a breath away.

The US Department of Defense has had a directive in place for three years that outlines the chain of command that would approve their deployment on a case-by-case basis. It’s called Directive 3000.09. And, on April 15, the third day of the panel meeting, Secretary of the Navy Ray Mabus, citing the breakthrough autonomous identification of a test target by unmanned ground and air vehicles working in tandem, announced the creation of a new office for unmanned warfare systems as well as a new deputy assistant secretary of the Navy to lead it.

The New Cyborg
Contrary to our common fantasy, man-killing robots won’t look like science-fiction androids, robots with human features. They won’t resemble the amazingly complex doll that Swiss watchmaker Pierre Jaquet-Droz built in 1774. (It had been mechanically programmed to write longhand, and is now in the Art and History Museum of Neuchâtel, Switzerland.)

They won’t look like Yan Shi’s mythical humanoid robot of 1000 BC, or the fifth century BC flying magpie purportedly made by King-shu Tse, also in China. (However, 21st century technical advancements as minute as android facial expressions and lifelike silicone-polymer skin have noticeably advanced these robotic human replicas. Yet they are still missing the wetness of the eye, a telltale human trait.)

LAWs will look much like their currently deployed predecessors — semi-autonomous robots that take the shapes of flying wings (drones), tanks and cars (land-based weapons) and pedestal-mounted machine guns (land and water). Advancements in artificial intelligence will gradually push the devices toward full autonomy.

According to Stuart Russell, who addressed the panel, core artificial intelligence abilities like sensory perception and tactical planning are, or soon will be, in place. As for more advanced AI, he pointed out that some researchers see full autonomy as currently feasible while others estimate its arrival in 20 to 30 years. “Humans will be largely defenseless against such systems,” he told the panel.
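What does machine “tactical planning” look like in practice? At its core it is search: an algorithm exploring possible moves and picking a lowest-cost route to a goal. Below is a minimal sketch using classic A* pathfinding on a toy grid map; the grid, the unit step costs and the Python setting are illustrative assumptions, not anything from Russell’s remarks.

```python
# Minimal sketch of route planning by search: A* finding a shortest
# path across a grid with obstacles. The map is a toy assumption.
import heapq

def astar(grid, start, goal):
    """Return a shortest path from start to goal on a 0/1 grid (1 = blocked)."""
    rows, cols = len(grid), len(grid[0])
    manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    # Each frontier entry: (estimated total cost, cost so far, node, path).
    frontier = [(manhattan(start, goal), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + manhattan((nr, nc), goal),
                                          cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

# Toy map: plan a route around an obstacle wall in the middle row.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```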

Russell is a UC Berkeley computer science professor and UCSF neurological surgery professor. “An arms race in this area,” Russell said, upon his return from Geneva, “will end with weapons that render humans and human-controlled systems completely defenseless. Inevitably they will be used in civil wars by dictators, by non-state actors in terrorist attacks, etc.” Also, cognizant lethal machines would reduce the threshold for going to war, he said.

Russell, who has authored numerous technical books on artificial intelligence, predicts few limitations in developing a brain for LAWs. Already, machines exceed humans at recognizing faces and gaits, and they match human performance at recognizing objects across 1,000 categories.

But can they fly a drone? Robots are now capable of superhuman precision in aerobatic maneuvers. Can they spot an enemy combatant? That’s not one of the thousand categories.
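For a concrete sense of what “a thousand categories” means: the figure is likely a reference to ImageNet-style benchmarks, in which classifiers are trained to sort photographs into exactly 1,000 everyday object classes. A minimal sketch, assuming PyTorch and torchvision are available; the pretrained network and the placeholder image path are illustrative, not from the article.

```python
# Minimal sketch: classifying an image into one of 1,000 ImageNet
# categories with an off-the-shelf pretrained network.
import torch
from torchvision import models
from torchvision.io import read_image

# Load a pretrained classifier and its matching preprocessing pipeline.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

# Classify a local image file ("example.jpg" is a placeholder path).
img = read_image("example.jpg")
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
probs = logits.softmax(dim=1)
top_prob, top_idx = probs.max(dim=1)

# Map the index back to one of the 1,000 human-readable categories.
category = weights.meta["categories"][top_idx.item()]
print(f"{category}: {top_prob.item():.1%}")
```

The point of the example is also the limitation Mencher flags: “enemy combatant” is not among those 1,000 labels, so matching human accuracy on a fixed benchmark is not the same as battlefield judgment.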

But the hardware housing that brain will face physical limitations, Russell predicted. Whatever the shape of the robot, including insect-size weapons, there will be problems with speed, acceleration, range and payload weight.

A small aerial bot weighing 5 grams would face such physical restrictions, but, he said, even smaller devices in the 1-gram range might be able to selectively kill a chosen human target on contact using a shaped explosive charge. “I’m not sure what countermeasures one might try against a swarm of 5-gram robots, but I’m sure people will come up with something, then there will be counter-countermeasures …”

In short, there would be a LAWs arms race. “If we had a ‘perfect weapon’ that, at the push of a button, wiped out every single enemy combatant, would that be a good idea? What if both sides had this weapon? Who would use it first?” he said.

The Meeting
Tactically and economically, the United States has much to lose in a LAWs ban. In its address to the panel, the US delegation urged rigorous testing and evaluation of proposed robotics, including anti-tamper mechanisms, but cautioned the panel against prematurely establishing battlefield deployment policies such as a ban.

Its address touted the Defense Department directive as a strategy to avoid “unintended engagements,” one of several negative outcomes of robot wars prognosticated by speakers at the meeting. (Others include the capture of such a robot by the enemy and, as South Korea’s representatives noted, the lack of accountability if a robot’s judgment to kill proves to be in error.)

The United Kingdom also is reluctant to restrict LAWs. The Foreign Office told the (London) Guardian newspaper, in what was the only print coverage during the meeting: “At present, we do not see the need for a prohibition on the use of LAWs, as international humanitarian law already provides sufficient regulation for this area.”

Japan, whose technology industry will surely benefit from advances in the United States’ and other nations’ military use of advanced AI, told the panel that “Japan, for its part, has no plan to develop robots with humans out of the loop, which may be capable of committing murder.”

Cuba, Ecuador, Ghana and Pakistan called for an outright preemptive ban on the killer robots. Amnesty, the Red Cross and others also called for an outright ban, as did the International Committee for Robot Arms Control.

The Vatican offered a lengthy and logical evaluation of LAWs, warning nations against becoming “slaves of our own inventions.” The Holy See further cautioned that ethical judgment “cannot be put into algorithms.”

It did not call for a proactive ban. However, after enumerating fears such as the loss of human control even in the semi-autonomous devices in use today, it said it could “envision” calling for such a ban, one covering not only the devices themselves but also the costly research to develop such autonomous killers.

Mutually Exclusive Terms
As Professor Patrick Lin of Cal Poly San Luis Obispo pointed out in his analytic address to the panel on April 16, “war is terrible, and all such (war) weapons are terrible, but some are more terrible than others.”

The Geneva discussion as a whole centered on the irony-laced task of conducting a humanitarian war: making it less terrible. Speakers and panel members struggled with the mutually exclusive terms, humanitarian and war. This is nothing new. It is embodied in the CCW’s five protocols, and is at the heart of nuclear war — or deterrence.

The ethics of humanitarian war hinge on reducing civilian suffering. Will lethal autonomous weapons increase the suffering? Of course they will. We are not discussing euthanasia. Yet. Will they, at times, kill in error? Naturally. Is mass murder a possible outcome? Certainly. Is it rational to try to establish a scale of how much suffering is acceptable to the CCW? No, that wouldn’t be rational; it would more aptly apply to an inhuman, or robotic, algorithm.

Will artificial intelligence evolve into a force truly independent of its human creator? That is inevitable, and it’s an inevitability that noted physicist Stephen Hawking, in a 2014 discussion with the BBC, found chilling. LAWs’ entry onto the battlefield will be gradual: the killer cyborgs of yesterday’s science fiction will steadily replace their semi-autonomous forerunners, but they will not look like us.

Lethal robots will be made and unleashed in the name of efficiency. They are the ultimate deadly outsource, and an outsource for human conscience. Yet they are a practical outgrowth of advanced AI: They will streamline war, feed the economy and remove humans from a battlefield that has grown so grim that 22 American war veterans commit suicide each day.

They will remove a nightmare that, in 2012, drove one active-duty soldier to suicide every other day. In short, LAWs will carry on a proxy war that man is no longer psychologically able to fight.

What on Earth would be the point of that?


Today’s Battlefield

A sample of current military weapons exhibiting various degrees of semi-autonomous lethality:

United States: X-47B flying wing by Northrop Grumman, being tested as part of a proposed fleet of drones on the aircraft carrier Theodore Roosevelt, which was dispatched last week to Yemen after high unrest there;
Long Range Anti-Ship Missile, which uses autonomous guidance algorithms for precision strikes;
“Crusher,” a 14,000-pound off-road Unmanned Ground Combat Vehicle developed at Carnegie Mellon University’s National Robotics Engineering Center for the Defense Department;
Phalanx, a 20mm Gatling gun with a radar-guided anti-missile system

Russia: Skat (translated: Manta Ray), an unmanned combat aerial vehicle (UCAV) shaped like a flying wing under development by aircraft-maker MiG;
MRK-002-BG-57, or “Wolf 2,” a tank the size of a car that has its own Facebook page: on.fb.me/1H8tP8S

Israel: Harpy, “fire and forget” autonomous delta-winged missile to destroy radar emitters;
Iron Dome missile/rocket/mortar interceptor system, initiated in 2011 and possibly hacked in 2014

UK: BAE Taranis, a flying wing much like the X-47B and Skat, first flight August 2013; the project team includes General Electric and Rolls-Royce;
Brimstone, “fire and forget” antitank missile, recently reviewed as fully autonomous

France: nEUROn, a flying wing in testing phase for “the high-level algorithms necessary to the development of the automated processes”

China: Sharp Sword, a UCAV flying wing whose maiden flight was November 2013

South Korea: Samsung SGR-A1 sentry robot, a turret-mounted machine gun that has a YouTube promo complete with patriotic music at bit.ly/1bdKc6u

CCW’s Five Protocols

The 120-nation Convention on Conventional Weapons dates to 1983. Its purview is arms that threaten international humanitarian law, cause undue suffering during war, and injure or kill indiscriminately:

Protocol I: Non-detectable fragments (like glass in bombs, can’t be seen with X-rays)

Protocol II: Mines, booby traps and other devices

Protocol III: Incendiary weapons (firebombs, flame, heat, chemical)

Protocol IV: Blinding laser weapons

Protocol V: Explosive remnants of war (unexploded ordnance)

Brooks Mencher is a Chronicle staff writer. To comment, submit your letter to the editor at www.sfgate.com/submissions. Twitter: @theNewsMench

Posted in accordance with Title 17, Section 107, US Code, for noncommercial, educational purposes.