Campaign to Stop Killer Robots – 2018-06-07 17:26:02
Google, Other Companies Must Endorse Ban
Campaign to Stop Killer Robots
Google and its parent company Alphabet are starting to address some ethical concerns raised by the development of artificial intelligence (AI) and machine learning, but, as yet, have not taken a position on the unchecked use of autonomy and AI in weapon systems.
These and other technology companies such as Amazon, Microsoft, and Oracle, should publicly endorse the call to ban fully autonomous weapons and commit to never help develop these weapons.
Doing so would support the rapidly expanding international effort to ensure that the decision to take human life is never delegated to a machine, whether in warfare, policing, or other circumstances.
In recent months, calls have mounted for Google to commit never to help create weapon systems that would select and attack targets without meaningful human control. Last month, more than four thousand Google employees issued an open letter demanding the company adopt a clear policy stating that neither Google nor its contractors will ever build “warfare technology.”
On 14 May, more than 800 scholars, academics, and researchers who study, teach about, and develop information technology released a statement in solidarity with the Google employees that calls on the companies to support an international treaty to prohibit autonomous weapon systems and commit not to use the personal data that the company collects for military purposes.
In the Guardian on 16 May, three co-authors of the academic letter highlight key questions that Google faces, such as: “Should it use its state of the art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons? Should it proceed despite moral and ethical opposition by several thousand of its own employees?”
Previously, in a 12 March letter to the heads of Google and Alphabet, the Campaign to Stop Killer Robots recommended the companies adopt “a proactive public policy” by committing to never engage in work aimed at the development and acquisition of fully autonomous weapons systems, also known as lethal autonomous weapons systems, and publicly support the call for a ban.
All these letters express concern over Google’s involvement in a Department of Defense-funded project to “assist in object recognition on unclassified data” contained in surveillance video footage collected by military drones. According to the Pentagon, Project Maven involves “developing and integrating computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that DoD collects every day in support of counterinsurgency and counterterrorism operations.”
The project, which began last year, seeks to turn the “enormous volume of data available to DoD into actionable intelligence and decision-quality insights at speed.”
Project Maven raises ethical and other questions about the appropriate use of machine learning and artificial intelligence for military purposes. The Campaign to Stop Killer Robots is concerned that AI-driven identification of objects could quickly blur into AI-driven identification of ‘targets’ as a basis for directing lethal force.
This could give machines the capacity to make a determination about what is a target, which would be an unacceptably broad use of the technology. That’s why the campaign is working to retain meaningful human control of the critical functions of identifying, selecting and engaging targets.
Google representatives are engaging in a dialogue with the Campaign to Stop Killer Robots and last month provided campaign coordinator Mary Wareham with a statement that says its work on Project Maven is “for non-offensive purposes and using open-source object recognition software available to any Google Cloud customer. The models are based on unclassified data only. The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.”
In July 2015, high-profile Google employees including research director Peter Norvig, scholar Geoffrey Hinton, and AI chief Jeff Dean co-signed an open letter endorsed by thousands of AI experts that outlined the dangers posed by lethal autonomous weapons systems and called for a new treaty to ban the weapons.
At Google DeepMind, CEO Demis Hassabis, co-founder Mustafa Suleyman and twenty engineers, developers and research scientists also signed the 2015 letter. The following year in a submission to a UK parliamentary committee Google DeepMind stated:
“We support a ban by international treaty on lethal autonomous weapons systems that select and locate targets and deploy lethal force against them without meaningful human control. We believe this is the best approach to averting the harmful consequences that would arise from the development and use of such weapons. We recommend the government support all efforts towards such a ban.”
Last month, Amazon’s Jeff Bezos expressed concern at the possible development of fully autonomous weapons, which he described as “genuinely scary,” and proposed a multilateral treaty to regulate them. The Campaign to Stop Killer Robots welcomes these remarks and encourages Amazon to endorse the call for a new treaty to prohibit fully autonomous weapons and pledge not to contribute to the development of these weapons, as Clearpath Robotics and others have done.
Issuing ethical principles means little if a company fails to act on fundamental challenges raised by military applications of autonomy and AI. Responsible companies should take seriously and publicly support the increasing calls for states to urgently negotiate a new treaty to prohibit fully autonomous weapons.
For more information, see:
* Google employees’ letter (April)
* Academic letter (14 May)
* Campaign to Stop Killer Robots letters to Google and Alphabet (12 March) and Amazon (16 May).
Five Years of Campaigning, CCW Continues
Campaign to Stop Killer Robots
April 2018 marks five years since the launch of the Campaign to Stop Killer Robots. It is also the fifth time since 2014 that governments are convening at the Convention on Conventional Weapons (CCW) in Geneva to discuss concerns over lethal autonomous weapons systems, also known as fully autonomous weapons or “killer robots.”
The campaign urges states to participate in the CCW Group of Governmental Experts meeting, which opens at the United Nations (UN) on Monday, 9 April, and to commit to retaining meaningful human control over weapons systems and over individual attacks.
The following “Frequently Asked Questions” provide background on the CCW GGE meeting, which representatives from more than 80 countries are expected to attend. Media wishing to attend and cover the meeting must be accredited to the UN in Geneva or announced to the UN Information Service (UNIS).
The CCW meeting will not be broadcast live via the web or other means, but selected country statements will be posted online and campaigners will provide live updates on social media, particularly Twitter, using the hashtag #CCWUN. CCW delegates and accredited media are welcome to attend the campaign’s lunchtime side event briefings for CCW delegates on Monday, 9 April and Wednesday, 11 April in Conference Room XXIII.
Frequently Asked Questions
What is the Convention on Conventional Weapons (CCW)?
The 1980 Convention on Conventional Weapons (CCW) is a framework instrument that contains five separate protocols, including Protocol IV which preemptively banned blinding lasers. A total of 125 nations are “high contracting” or state parties to the CCW, including all five permanent members of the UN Security Council. CCW meetings are open to all states, UN agencies, the International Committee of the Red Cross (ICRC), and registered non-governmental organizations including the Campaign to Stop Killer Robots.
At the end of 2013, states agreed that the CCW should begin considering questions relating to lethal autonomous weapons systems. More than 80 states participated in three exploratory “informal meetings of experts” on the topic in 2014-2016.
At the end of 2016, states formalized their work by establishing a “Group of Governmental Experts” or GGE, which first met in November 2017. The CCW will meet for twice as long in 2018, with the first GGE meeting on 9-13 April and the second on 27-31 August.
Why the concern about killer robots?
Armed drones and other autonomous weapons systems with decreasing levels of human control are currently in use and development by high-tech militaries including the US, China, Israel, South Korea, Russia, and the UK.
The concern is that a variety of available sensors and advances in artificial intelligence are making it increasingly practical to design weapons systems that would target and attack without any meaningful human control. If the trend towards autonomy continues, humans may start to fade out of the decision-making loop for certain military actions, perhaps retaining only a limited oversight role, or simply setting broad mission parameters.
Several states, the Campaign to Stop Killer Robots, artificial intelligence experts, faith leaders, and Nobel Peace Laureates, among others, fundamentally object to permitting machines to determine who or what to target on the battlefield or in policing, border control, and other circumstances. Such a far-reaching development raises an array of profound ethical, human rights, legal, operational, proliferation, technical, and other concerns.
While the capabilities of future technology are uncertain, there are strong reasons to believe that devolving more decision making over targeting to weapons systems themselves will erode the fundamental obligation that rules of international humanitarian law (IHL) and international human rights law be applied by people, and with sufficient specificity to make them meaningful.
Furthermore, with an erosion of human responsibility to apply legal rules at an appropriate level of detail there will likely come an erosion of human accountability for the specific outcomes of such attacks. Taken together, such developments would produce a stark dehumanization of military or policing processes.
What will happen at the April CCW meeting?
States participating in the 2018 CCW meetings will not take any formal decisions, but aim to produce a GGE report with findings and proposals on the way ahead that will be adopted at the CCW’s Meeting of High Contracting Parties on 21-23 November 2018, under the presidency of Ambassador Janis Karklins of Latvia.
GGE chair Ambassador Amandeep Singh Gill of India is responsible for organizing the 2018 GGE meetings with the support of the Geneva office of the UN Office for Disarmament Affairs. According to the provisional programme of work the GGE meeting will open on Monday, 9 April with a general exchange of views. A panel on Thursday morning will review the “potential military applications of related technologies.”
CCW delegates will spend most of the week in a series of three-hour sessions, starting with characteristics or definitions of lethal autonomous weapons systems (3 x sessions) then human-machine interaction (2 x sessions) and finally, the “way ahead” on Friday, to consider “possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of lethal autonomous weapons systems in the context of the objectives and purposes of the Convention without prejudging policy outcomes and taking into account past, present and future proposals.”
The GGE chair has invited states to prepare for the GGE meetings by producing and submitting working papers in advance. Read the CCW working papers provided by the United States, Russia, Poland, and the Non-Aligned Movement. The CCW does not accept working papers from NGOs, so the Campaign to Stop Killer Robots has prepared a 3½-page Briefing Note to guide states that is also available in French and Spanish.
The campaign encourages states to come prepared to:
1. Elaborate the key characteristics for a working definition of lethal autonomous weapons systems, which the campaign views as systems operating without meaningful human control in the “critical functions” of identifying, selecting and applying force to targets;
2. Identify the relevant “touchpoints” of human/machine interaction through which the necessary human control over weapons systems can be enacted and ensured; and
3. Outline the preferred pathway forward, resisting measures that fall short of a legally binding instrument and calling for negotiations to begin.
What is a “killer robot”?
A weapons system that identifies, selects and employs force against targets without meaningful human control should be considered a lethal autonomous weapons system. It would have no human in the decision-making loop when the system selects and engages the target of an attack. Applying human control only as a function of design and in an initial deployment stage would fail to fulfill the IHL obligations that apply to commanders in relation to each “attack.”
While the exact wording of legal definitions would be finalized during negotiations as required, a shared understanding of key characteristics and their relationship to key terms would facilitate effective discussion at the CCW meeting.
Why the need for “human control”?
Sufficient human control over the use of weapons, and over their effects, is essential to ensuring that the use of a weapon is morally justifiable and can be legal. Such control is also required as a basis for accountability for the consequences of the use of force.
To demonstrate that such control can be exercised, states must show that they understand the process by which specific systems identify individual target objects and understand the context, in space and time, where the application of force may take place.
Given the development of greater autonomy in weapon systems, states should make it explicit that meaningful human control is required over individual attacks and that weapon systems that operate without meaningful human control should be prohibited. For human control to be meaningful, the technology must be predictable, the user must have relevant information, and there must be the potential for timely human judgment and intervention.
States should come to the CCW meeting prepared to provide their views on the key “touchpoints” of human/machine interaction in weapons systems. These include design aspects, such as how certain features may be encoded as target objects; how the area or boundary of operation may be fixed; the time period over which a system may operate; and any possibility of human intervention to terminate the operation and recall the weapon system.
Based on these touchpoints, states should be prepared to explain how control is applied over existing weapons systems, especially those with certain autonomous or automatic functions.
Will states at the CCW ban killer robots?
The 2018 GGE meetings are not negotiations, and the CCW talks are not yet focused on working towards a specific outcome such as a new CCW protocol to ban or regulate these weapons systems. However, most states agree that some action should be taken to address concerns over fully autonomous weapons, even if they disagree on what form it should take.
At the CCW, France and Germany have proposed the creation of a political declaration and code of conduct, while others have called for greater transparency, especially concerning national processes to conduct legal reviews of new weapons systems. As presented so far, such proposals have lacked the fundamental moral and logical coherence necessary to make them credible. They appear to merely reflect a desire to be seen as doing “something” rather than a firm determination to avoid dehumanizing the use of force.
The CCW should articulate first and foremost a legal commitment to ensuring meaningful human control and a constraint on the development of autonomy in the critical functions of weapons systems. The GGE meetings should recommend that states at the CCW annual meeting in November 2018 adopt a mandate to begin negotiations on a legally binding instrument on lethal autonomous weapons systems. States should express support for that recommendation.
They should also express commitment to work in coordination with like-minded states, UN agencies, international organizations, civil society, and other stakeholders to conclude a legally binding instrument prohibiting the development, production, and use of lethal autonomous weapons systems by the end of 2019. If the CCW is not up to this task, other diplomatic options should be explored. To build support for an international agreement, states should also quickly adopt national legislation banning lethal autonomous weapons systems.
Rapid progress is possible, but the window for credible preventative action in the CCW is fast closing. The talks could and should result in a new CCW protocol requiring meaningful human control over attacks and prohibiting lethal autonomous weapons systems (systems that do not allow for that human control).
There is already a sound foundation from which to begin negotiations. States share the same concerns over the multiple ethical, legal, operational, proliferation, technical and other challenges raised by lethal autonomous weapon systems. Nearly all of the 90 countries participating in this debate have acknowledged the need to retain meaningful or necessary human control over the use of force involving autonomous weapons. Several have committed not to acquire or develop fully autonomous weapons, while 22 countries have endorsed the call for a ban.*
Who will participate from the Campaign to Stop Killer Robots?
The Campaign to Stop Killer Robots delegation to the CCW meeting comprises 35 campaigners from member NGOs in countries including Cameroon, Canada, Colombia, Egypt, Germany, Japan, the Netherlands, Spain, Switzerland, the UK, the US, and Zimbabwe. It includes key spokespersons such as roboticist Noel Sharkey.
Following the precedent set by previous CCW meetings, the Campaign to Stop Killer Robots will participate in every session, make statements, circulate documents, and convene side events on Monday, 9 April and Wednesday, 11 April in Conference Room XXIII. Women comprise half of the Campaign to Stop Killer Robots delegation to the CCW meeting, including side event speakers and media spokespersons.
What is the Human Rights Council doing about killer robots?
The first multilateral debate on killer robots took place at the Human Rights Council in May 2013, but states have not considered the topic at the Council since then. Countries such as Austria, Brazil, Ireland, Sierra Leone, and South Africa affirm the relevance of human rights considerations, and of the Council itself, to the international debate over fully autonomous weapons.
In February 2016, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions and the Special Rapporteur on the rights to freedom of peaceful assembly and of association issued a report recommending that “autonomous weapons systems that require no meaningful human control should be prohibited.”
The UN Special Rapporteur on extrajudicial, summary or arbitrary executions last addressed a CCW meeting on lethal autonomous weapons in April 2016. Human rights are no longer considered relevant in the CCW talks, which raises the question of how to address human rights concerns with these weapons, particularly their use in law enforcement, border control and other circumstances outside of armed conflict.
* 22 states support the call to ban fully autonomous weapons: Algeria, Argentina, Bolivia, Brazil, Chile, Costa Rica, Cuba, Ecuador, Egypt, Ghana, Guatemala, Holy See, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Uganda, Venezuela, and Zimbabwe.
Please follow the campaign’s social media accounts on Instagram, Facebook, and Twitter.
For more information see:
* Campaign Press Release (also available in Spanish)
* Campaign Briefing Note (also available in French and Spanish), Delegation List, Side Event Flyers (Monday, 9 April and Wednesday, 11 April).
* CCW webpage for the April GGE meeting, incl. provisional programme of work
* Reaching Critical Will webpage for the April GGE meeting.
* Campaign reports of previous CCW meetings on killer robots held in November 2017, April 2016, April 2015, and May 2014
Posted in accordance with Title 17, Section 107, US Code, for noncommercial, educational purposes.