US & China in Race to Build Killer Robots: No One Liable if Future Machines Decide to Kill

January 21st, 2016 - by admin

Aliya Sternstein / Nextgov & Chris Green / The Independent

http://www.defenseone.com/technology/2016/01/us-thinks-china-may-have-stolen-military-robot-designs/125168/

The US Thinks China May Have Stolen Military Robot Designs
Aliya Sternstein / Nextgov

(January 15, 2016) — The federal government wants to know if hacked trade secrets are aiding the rise of an army of Chinese androids. US officials have ordered an investigation into whether China might be gaining an unfair competitive advantage in the robotics race.

At least one China-backed cyberspy operation reportedly snared robotics research from QinetiQ, a Pentagon contractor and the supposed inspiration for gadget-maker “Q” in the James Bond movie franchise.

This week, the US-China Economic and Security Review Commission began looking for analysts to write an unclassified report on China’s current industrial and military robotics capabilities, including the origins of those capabilities.

The study will identify know-how and tools that “have likely been acquired by China through technology transfers or cyber penetrations,” according to a Jan. 13 federal business solicitation.

The commission also intends to gauge the chances China’s automation efforts could eclipse comparable Pentagon initiatives, including “Offset,” a Defense Department research initiative meant to “offset” technological advances made by adversaries.

Between 2007 and 2009, attackers tied to the People’s Liberation Army allegedly hacked a QinetiQ specialist who worked on embedded software in microchips that control the company’s military robots, Bloomberg reported, citing investigations by security firms Terremark and HBGary. The Chinese military later showcased a bomb disposal robot in April 2012 that resembled QinetiQ’s Dragon Runner.

Now the United States is saying publicly it’s aiming to find out the technical specs of China’s humanoids. The forthcoming report will “identify key suppliers of components and chips,” as well as programming languages used in robotics research and development.

“To what extent do Chinese robotics technologies rely on US or other imported software, components or other technology?” is one question the study will address. In addition, the US government seeks to learn the names of R&D organizations in the Chinese robotics field and locate any ties to the PLA.

Chinese “breakthroughs” in self-driving vehicles, unmanned aircraft and seagoing drones are also of US interest. The commission noted last year that Chinese robots are already capable of engaging in extraterrestrial war.

While antisatellite systems haven’t been much of a threat since the Cold War, China’s space activities suggest the nation state is tailoring machines to potentially eviscerate US space assets, according to the commission’s 2015 report to Congress.

The Chinese systems consist of “a satellite armed with a weapon,” commission officials said. Once close enough to an American target, the machine can deploy the armament against or “intentionally crash into the target satellite.”

China is “setting a strong foundation for future co-orbital antisatellite systems that could include jammers, robotic arms, kinetic kill vehicles, and lasers,” the report stated. Some of China’s hopes for AI-powered combat are public knowledge.

Chinese state-sponsored news agency Xinhua reported on Dec. 27, 2015, that at a civil-military integration conference in Beijing, several military equipment-makers demonstrated products, including robots and unmanned reconnaissance aircraft, to 200 PLA members.

“The Chinese government and the PLA have meted out a succession of measures to boost the private sector’s participation in the arms and equipment industry over the past two years,” according to Xinhua.

Deputy Defense Secretary Robert Work recently said he expects to see Chinese or Russian robotic troops orchestrating military operations one day soon.

“We know that China is already investing heavily in robotics and autonomy, and the Russian Chief of General Staff [Valery Vasilevich] Gerasimov recently said that the Russian military is preparing to fight on a roboticized battlefield,” Work told a national security forum on Dec. 14, 2015, according to Defense One.

“And he said, and I quote, ‘In the near future, it is possible that a complete roboticized unit will be created capable of independently conducting military operations.'”

Aliya Sternstein reports on cybersecurity and homeland security systems. She’s covered technology for more than a decade at such publications as National Journal’s Technology Daily, Federal Computer Week and Forbes. Before joining Government Executive, Sternstein covered agriculture and derivatives.


Killer Robots: No One Liable if Future Machines Decide to Kill, Says Human Rights Watch

Chris Green / The Independent

LONDON (April 9, 2015) — If a soldier pulls a trigger on the battlefield with lethal consequences, then they bear the ultimate responsibility for their actions. But what if the same act were carried out by a robot, with no human involvement?

Under current laws, computer programmers, manufacturers and military personnel would all escape liability for unlawful deaths and injuries caused by fully autonomous weapons, or “killer robots”, a major report has warned.

Machines with the ability to take decisions to kill are no longer the preserve of science fiction films, it argues, pointing out that the technology which could give rise to such weapons is “already in use or development” in countries including the UK and US.

The report, by Human Rights Watch (HRW) and Harvard Law School’s International Human Rights Clinic, comes ahead of a United Nations meeting next week at which the role of autonomous weapons in warfare will be discussed.

While military commanders could be found guilty if they intentionally instructed a killer robot to commit a crime, they would be unlikely to face prosecution if they were able to argue that it had acted of its own volition, the report concluded.

The researchers added that although victims or their families could pursue civil lawsuits against the deadly machine’s manufacturers or operators, this would only entitle them to compensation and would be “no substitute for criminal accountability”.

Bonnie Docherty, a lecturer at the Harvard Law School clinic and the report’s lead author, said: “No accountability means no deterrence of future crimes, no retribution for victims, no social condemnation of the responsible party. The many obstacles to justice for potential victims show why we urgently need to ban fully autonomous weapons.”

Such machines would move beyond existing remote-controlled drones as they would be able to select and engage targets without a human being “in the loop” — raising a variety of serious ethical and legal concerns, the report said.

“Existing mechanisms for legal accountability are ill suited and inadequate to address the unlawful harms fully autonomous weapons might cause,” the authors wrote. “These weapons have the potential to commit criminal acts — unlawful acts that would constitute a crime if done with intent — for which no one could be held responsible.”

The report added that even if a robot’s commander knew it was about to commit a potentially unlawful act, they may be unable to stop it if communications had broken down, if the robot acted too fast, or if reprogramming was only possible by specialists. “In addition, ‘punishing’ the robot after the fact would not make sense,” the authors added.

Campaigners would like the use of such robots to be pre-emptively banned through a new international law. This would have to be written into the UN’s Convention on Conventional Weapons, which in 1995 pre-emptively outlawed blinding laser weapons before they could be developed.

In the UK, ministers have said there are no plans for the military to create weapons capable of autonomous killing. But David Mepham, the UK director of HRW, said the next Government “should not hesitate to back a pre-emptive ban on their development, production and use” by other countries around the world.

Professor Noel Sharkey, a leading roboticist at Sheffield University and co-founder of the International Committee on Robot Arms Control, said that if a machine committed a war crime its commander would have “lots of places to hide” to evade justice, such as blaming the software or the manufacturing process.

“If you wanted to use an autonomous robot to commit a war crime, the first thing you’d do is blow it up so nobody could do forensics on it,” he said.

He added that in the US, the latest prototypes involved “swarms” of robotic gun-boats which could be deployed to engage an enemy, communicating with each other to select targets. Although a human would be able to deploy the robots and call them back, they “wouldn’t be controlling the individual kill decisions”, he said.

If such a law were passed by the UN, it would have to be defined very carefully so that defensive weapons systems that automatically detect incoming missiles and mortar fire, but do not threaten human life, could still be used, he added.

Thomas Nash, the director of UK-based weapons monitoring organisation Article 36, said the possible creation of killer robots was a “genuine concern” and that unmanned drones with the ability to select their own targets were already in operation. “It’s not a big step from there to devolve the capability for those systems to release a missile based on a pre-programmed algorithm,” he added.

Weapons of the Future?
Taranis

A prototype stealth combat drone, said to represent “the pinnacle of UK engineering and aeronautical design”. It is able to conduct surveillance, mark targets and carry out air strikes. The RAF and Ministry of Defence stress that a human is always in control, but Taranis is also capable of “full autonomy”.

SGR-1
Standing for “Sentry Guard Robot”, this fixed-position weapon is capable of tracking and engaging human targets with a mounted grenade launcher or machine gun — once permission is granted by a soldier back at base. It is currently deployed in South Korea on the border with North Korea.

X-47B
Developed by US defence firm Northrop Grumman, this unmanned combat aircraft has the ability to take off and land on an aircraft carrier without human intervention.

Although it has a full-sized weapons bay, prototypes tested so far have been unarmed. The current aim is for the drone to be “battlefield ready” by the 2020s.

Posted in accordance with Title 17, Section 107, US Code, for noncommercial, educational purposes.