April 22, 2016 Rasha Abdul Rahim / Amnesty International & Ray Acheson / Reaching Critical Will
In April, the Convention on Conventional Weapons held its third informal meeting of experts to address growing alarm over the use of drones and the development of autonomous weapons. The idea of machines killing humans on the basis of algorithms is cynically abhorrent. While the US insists that its drones provide "enhanced situational awareness," independent reviews reveal that, during one five-month stretch, "90% of people killed by US drone strikes were unintended targets."
Ten Reasons Why It's Time to Get Serious about Banning 'Killer Robots'
Rasha Abdul Rahim / Amnesty International
LONDON (April 16, 2016) -- 1. "Killer Robots" will not be a thing of science fiction for long. "Killer robots" are weapons systems that, once activated, can select, attack, and kill or injure human targets without a person in control. Once the stuff of dystopian science fiction, these weapons -- also known as "autonomous weapon systems" (AWS) -- will soon become fact.
Many precursors to this technology already exist.
The Vanguard Defense Industries' ShadowHawk drone, for example, can be armed with a grenade launcher, a shotgun with laser designator, or less-lethal weapons such as a Taser or beanbag round launcher.
In 2011, the office of the Sheriff in Montgomery County, Texas, purchased an unarmed ShadowHawk with a grant from the Department of Homeland Security.
In August 2015, North Dakota became the first US state to legalize the use of drones that can be used remotely to incapacitate people with high-voltage electric shocks.
2. They will be openly used for repression by unaccountable governments. Some governments argue that AWS could reduce the risks of deploying soldiers to the battlefield, or police on dangerous law enforcement operations.
Their use would therefore make it easier for governments to enter new armed conflicts and use force in, for example, policing of protests. Though soldiers and police might be safer, this lowered threshold could lead to more conflict and use of force and, consequently, more risk to civilians.
Proponents of AWS also argue that their lack of emotion would eliminate negative human qualities such as fear, vengeance, rage, and human error. However, human emotions can sometimes act as an important check on killing or injuring civilians, and robots could easily be programmed to carry out indiscriminate or arbitrary attacks on humans, even on a mass scale. AWS would be incapable of refusing orders, which at times can save lives.
3. They would not comply with human rights law and international policing standards. International policing standards prohibit the use of firearms except in defence against an imminent threat of death or serious injury, and force can only be used to the minimum extent necessary.
It is very difficult to imagine a machine substituting human judgment where there is an immediate and direct risk that a person is about to kill another person, and then using appropriate force to the minimum extent necessary to stop the attack.
Yet such a judgement is critically important to any decision by an officer to use a weapon. In most situations police are required by UN standards to first use nonviolent means, such as persuasion, negotiation, and de-escalation, before resorting to any form of force.
Effective policing is much more than just using force; it requires the uniquely human skills of empathy and negotiation, and an ability to assess and respond to often dynamic and unpredictable situations. These skills cannot be boiled down to mere algorithms. They require assessments of ever-evolving situations, and of how best to lawfully protect the right to life and physical integrity, that machines are simply incapable of.
Decisions by law enforcement officers to use minimum force in specific situations require direct human judgment about the nature of the threat and meaningful control over any weapon. Put simply, such life and death decisions must never be delegated to machines.
4. They would not comply with the rules of war. Distinction, proportionality, and precaution are the three pillars of international humanitarian law, the laws of war. Armed forces must distinguish between combatants and noncombatants; civilian casualties and damage to civilian buildings must not be excessive in relation to the expected military gain; and all sides must take reasonable precautions to protect civilians.
All of this, clearly, requires human judgment. Robots lack the ability to analyse the intentions behind people's actions, or make complex decisions about the proportionality or necessity of an attack. Not to mention the need for compassion and empathy for civilians caught up in war.
5. There would be a huge accountability gap for their use. If a robot acted unlawfully, how could it be brought to justice? In principle, those involved in its programming, manufacture, and deployment, as well as superior officers and political leaders, could be held accountable.
However, it would be impossible for any of these actors to reasonably foresee how an AWS would react in any given circumstance, potentially creating an accountability vacuum.
Already, investigations into unlawful killings through drone strikes are rare, and accountability even rarer. In its report on US drone strikes in Pakistan, Amnesty International exposed the secrecy surrounding the US administration's use of drones to kill people and its refusal to explain the international legal basis for individual attacks, raising concerns that strikes in Pakistani Tribal Areas may have also violated human rights.
Ensuring accountability for drone strikes has proven difficult enough, but with the extra layer of distance in both the targeting and killing decisions that AWS would involve, we are only likely to see an increase in unlawful killings and injuries, both on the battlefield and in policing operations.
6. The development of "Killer Robots" will spark another arms race. China, Israel, Russia, South Korea, the UK, and the USA, are among several states currently developing systems to give machines greater autonomy in combat.
Companies in a number of countries have already developed semiautonomous robotic weapons that can fire tear gas, rubber bullets, and electric-shock stun darts in law enforcement operations.
The history of weapons development suggests it is only a matter of time before this sparks another high-tech arms race, with states seeking to develop and acquire these systems and causing them to proliferate widely. They would eventually end up in the hands of non-state actors, including armed opposition groups and criminal gangs.
7. Allowing machines to kill or use force is an assault on human dignity. Allowing robots to have power over life-and-death decisions crosses a fundamental moral line. They lack emotion, empathy and compassion, and their use would violate the human rights to life and dignity. Using machines to kill humans is the ultimate indignity.
8. If "Killer Robots" are ever deployed, it would be near impossible to stop them. As the increasing and unchecked use of drones has demonstrated, once weapons systems enter into use, it is incredibly difficult or near impossible to regulate or even curb their use.
The "Drone Papers" recently published by The Intercept, if confirmed, paint an alarming picture of the lethal US drones programme. According to the documents, during one five-month stretch, 90% of people killed by US drone strikes were unintended targets, underscoring the US administration's longstanding failure to bring transparency to the drones programme. It appears too late to abolish the use of weaponized drones, yet their use must be drastically restricted to save civilian lives.
AWS would greatly amplify the risk of unlawful killings. That is why such robots must be preemptively banned. Taking a "wait and see" approach could lead to further investment in the development and rapid proliferation of these systems.
9. Thousands of robotics experts have called for "Killer Robots" to be banned. In July 2015, some of the world's leading artificial intelligence researchers, scientists, and related professionals signed an open letter calling for an outright ban on AWS.
So far, the letter has gathered 20,806 signatures, including more than 14 current and past presidents of artificial intelligence and robotics organizations and professional associations. Notable signatories include Google DeepMind chief executive Demis Hassabis, Tesla CEO Elon Musk, Apple cofounder Steve Wozniak, Skype cofounder Jaan Tallinn, and Professor Stephen Hawking.
If thousands of scientific and legal experts are so concerned about the development and potential use of AWS and agree with the Campaign to Stop Killer Robots that they need to be banned, what are governments waiting for?
10. There has been a lot of talk but little progress in three years. Ever since the problems posed by AWS were first brought to light in April 2013, the only substantial international discussions on this issue have been three weeklong informal experts meetings at the CCW. It is disappointing that so little time has been devoted to so serious an issue, and so far little progress has been made.
The Campaign to Stop Killer Robots is calling on states to establish a Group of Governmental Experts or "GGE" that can begin formal negotiations in 2017 on a new CCW protocol on lethal autonomous weapons systems. A more substantive and outcome-oriented mandate would demonstrate progress and the relevance of the CCW in responding to increasing concerns.
For Amnesty International and its partners in the Campaign to Stop Killer Robots, a preemptive prohibition on the development, deployment, and use of autonomous weapon systems is the only real solution. The world cannot wait any longer to take action against such a serious global threat. It's time to get serious about banning AWS once and for all.
The Campaign to Stop Killer Robots calls for a pre-emptive and comprehensive ban on the development, production, and use of fully autonomous weapons, also known as lethal autonomous weapons systems or killer robots. This should be achieved through new international law (a treaty), as well as through national laws and other measures.
We are concerned about weapons that operate on their own without meaningful human control. The campaign seeks to prohibit taking the human ‘out-of-the-loop’ with respect to targeting and attack decisions on the battlefield.
The Campaign to Stop Killer Robots has been established to provide a coordinated civil society response to the multiple challenges that fully autonomous weapons pose to humanity.
Out from the Shadows: Law, Ethics, and the Prohibition of Autonomous Weapons
Editorial by Ray Acheson / CCW Report from Reaching Critical Will
(April 15, 2016) -- "When we act from afar and from the shadows, we do much more harm than good." A retired Captain from the US Air Force Reserve, writing a letter to The New Yorker this week, critiqued "automated warfare" for its false sense of precision and separation of body from battlefield.
These concerns, as others raised at this CCW meeting, have serious ethical, moral, and human rights implications when it comes to increasing autonomy in weapon systems.
Thursday's plenary meetings featured discussions on many of these implications, as well as the risks for global and regional destabilisation. The compelling solution to these challenges remains a prohibition on weapon systems operating without meaningful human control.
Christof Heyns, UN special rapporteur for extrajudicial, summary, or arbitrary executions, argued that weapons operating without meaningful human control pose a range of human rights concerns. In particular, he noted, such autonomous weapon systems (AWS) risk undermining the right to life, which is essential for protecting human beings from the use of force.
Several states and civil society groups have expressed concern that AWS risk lowering the threshold for the use of force. Pablo Kalmanovitz of Universidad de los Andes, Colombia, agreed that AWS risk lowering the threshold to go to war because of the perception of minimised risk to the deploying force, and argued that they also risk lowering the threshold for violence within war, since more attacks might be made during a conflict for the same reason.
Heyns also noted that even where AWS might be employed in capture rather than kill operations, this is still a use of force and would be subject to the same concerns if done without meaningful human control over the weapon system.
Chile and Amnesty International also highlighted what Chile's delegation described as the "terrible impact" AWS would have on human rights.
Amnesty International argued that without effective and meaningful human control, AWS threaten the right to life, right to security of person, right to human dignity, and possibly the right to freedom of peaceful assembly and should be banned.
This last element was also recently addressed by Heyns, along with the UN special rapporteur on the rights to freedom of peaceful assembly and of association, in a report on the "proper management of assemblies."
The report recommends that, "where advanced technology is employed, law enforcement officials must, at all times, remain personally in control of the actual delivery or release of force."
The threat to the right to human dignity has been repeatedly highlighted by lawyers, ethicists, and others participating in these discussions over the past three years. Heyns again reiterated that targeting by AWS reduces human beings to zeroes and ones in a computer, with serious consequences for human dignity.
Further, Heyns argued, if dignity is understood also to entail assuming responsibility for one's action, then the use of AWS can challenge this in various ways. Humans deploying AWS are not necessarily the authors of actual actions that take place -- if they do not have meaningful control over the machine, this affects their ability to make "responsible decisions".
This of course also has implications for accountability and liability. Heyns argued that control and accountability are two sides of the same coin. If one does not have control, one cannot be accountable. This lack of accountability, he suggested, in itself constitutes a violation of human rights.
The US delegation on Thursday argued that "adherence to ethical and moral norms will depend less on the inherent nature of a technology and more on its potential use by humans." But as many have pointed out over the past three years, it is the inherent nature of AWS that has serious implications for human rights and ethics, not merely "misuse" of such weapons.
This is the basic idea behind the fundamental principle of IHL that the choice of methods and means of warfare is not unlimited. It is the affront to human dignity posed by AWS that drives many ethical and societal objections to their development and deployment.
The idea of machines killing humans on the basis of software and algorithms alone is, as WILPF has noted, cynically abhorrent, or, as Kalmanovitz said, a nightmare for civilians.
Transfer of Risk
Indeed, Kalmanovitz noted that while AWS may minimise risks to the deploying force's soldiers, it can amplify risks to civilians. He warned that programming in AWS may incorporate preferences of militarily advanced countries to shift risks away from their own forces.
Greater damage to civilians could come to be treated as proportional because preferences of the deploying force are programmed in at the expense of civilian protection.
This is a risk, Heyns and Amnesty International argue, not only for civilians living under situations of armed conflict, but also in law enforcement contexts. Heyns and Eliav Lieblich, of the Radzyner Law School in Israel, both expressed concerns about what might happen if an AWS programmed for use in armed conflict is used in a law enforcement situation. Law enforcement officials have a responsibility to protect the public, argued Heyns, but this obligation is not as strong in armed conflict.
These challenges run counter to what Denise Garcia of Northeastern University described as a global norm for preventative regulations to protect civilians.
The development and deployment of AWS, she argued, would jeopardise civilian protection, human rights laws, and the architecture and principles for sustainable peace, including disarmament and reduction of military spending. She highlighted that the development of AWS would divert resources away from peace and disarmament in violation of the UN Charter's article 26.
Deference to Machines
Yet it appears that some states at CCW wish to leave the door open to the development of further autonomy in weapons.
The US argued that "human-machine teaming in targeting has brought not only enhanced situational awareness to help reduce the immediate risk to soldiers, but also better discrimination and the ability to exercise tactical patience, where additional time can be taken to ensure accurate target identification and avoid civilian casualties."
However as Amnesty International points out in an article in this edition of the CCW Report, the "Drone Papers" recently published by The Intercept "paint an alarming picture of the lethal US drones programme. According to the documents, during one five-month stretch, 90% of people killed by US drone strikes were unintended targets."
While the US argues that machines' participation in targeting can enable humans to "make better decisions," the actual deployment of such machines appears to have serious precision issues and risks undermining rules of proportionality and increasing the transfer of risk to civilians.
In this context, Heyns highlighted that because machines are faster at processing information and suggesting actions on that basis, we are becoming accustomed to deferring to machines and relying on their determinations.
Thus, even if humans could act fast enough to override an autonomous delivery of force, there might be an inclination to defer to the machine because the stakes are so high, he warned. We use machines as tools, explained Heyns, yet we sometimes think they might know better.
This challenge suggests that human beings need to have meaningful human control not just over each individual attack but also over analysing and selecting targets. It also highlights a key problem with suggestions that AWS should have the "possibility of human control" -- this seems to lack the effectiveness of the human being fully responsible for the operation of a weapon and the selection of and engagement with targets.
The need for meaningful human control over targeting and individual attacks is the basis for a prohibition on autonomous weapons systems.
Calls for this prohibition are growing. The two special rapporteurs writing on the proper management of assemblies recommended that AWS without meaningful human control should be banned.
Heyns reiterated this call during his presentation on Thursday, joining fourteen states, thousands of scientists and the Campaign to Stop Killer Robots in urging the negotiation of a treaty preventing the development, deployment, and use of AWS. Nearly all of the other panelists on Thursday supported a prohibition, reflecting the growing momentum for serious action on this issue.
As Chile's delegation said Thursday, the disarmament and human rights community has a responsibility to be ahead of the curve on AWS and act now to prevent their introduction into our shared world.
Posted in accordance with Title 17, Section 107, US Code, for noncommercial, educational purposes.