ACTION ALERT: Thank Google for Ending Contract with Pentagon

June 28th, 2018 - by admin

The World Can’t Wait & World BEYOND War & Google

Special to Environmentalists Against War

ACTION ALERT:
Thank Google for Ending Contract with Pentagon

The World Can’t Wait

(June 27, 2018) — In NYC this morning, a few of us representing World Can’t Wait, kNOdrones.com, World Beyond War and Refuse Fascism did outreach outside Google headquarters to thank the over 3000 Google employees who called on Google’s leaders to state that “neither Google nor its contractors will ever build warfare technology.”

We wanted to let like-minded people know there are other like-minded people out there with whom they can connect in a variety of ways, such as checking in on our websites. Eventually each of those one-off connections can turn into a larger network. We went in the morning, when people were coming to work, so the 500 leaflets we distributed would wind up inside Google, remain on desks for a few days and perhaps be passed around.

While Google is a corporate empire competing with other corporations for military contracts, the resignation of a dozen employees and the statement signed by thousands of other workers declaring that they “believe that Google should not be in the business of war” mark a major historic step toward creating real barriers to the military-industrial complex functioning in business-as-usual mode.

In fact, this movement is spreading to Amazon workers as well, with employees demanding that the company halt the sale of Rekognition, its facial-recognition technology, to law-enforcement agencies.


“Google Should Not Be In The Business Of War”:
Understanding the Weaponization of Artificial Intelligence

Marc Eliot Stein / World Beyond War

(June 8, 2018) — In early April, more than 3100 Google employees signed a letter that begins with the words “Google should not be in the business of war.” The letter is a response to the company’s participation in a new US Department of Defense artificial intelligence program called Project Maven, which the letter describes as a “customized AI surveillance engine” designed to interpret visual images from drones, and it concludes with a powerful request from Google employees to their management:

“Recognizing Google’s moral and ethical responsibility, and the threat to Google’s reputation, we request that you:
1. Cancel this project immediately
2. Draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology”

This brave act of protest and social responsibility is remarkable for its clarity, and deserves to be recognized as one of the few known cases of active scientists or workers directly objecting to their participation in the horrors of war, along with the Russell-Einstein Manifesto of 1955 (https://en.wikipedia.org/wiki/Russell–Einstein_Manifesto), which urged the abolition of war as the only path forward for a world now armed on all sides with nuclear weapons.

This remarkable letter, along with the resignation of about a dozen Google employees, proved its power over a month later when Google management announced that it would not renew Project Maven after the current contract expires in March 2019, and acknowledged the “backlash” against Google’s public reputation as the primary reason behind this management decision.

While this response does not satisfy the demands in the original letter, it is clearly a step in the right direction, and it shows the potential of the Google employees’ act of protest as a foundation to build upon as the world grapples with the realization that the US military (and, presumably, other military forces as well) is moving quickly to weaponize artificial intelligence.

World BEYOND War has published a new petition to thank these Google employees. We should not only thank them for their courage but should also each think hard for ourselves about the implications of this new form of technology, and about our shared responsibility to avoid the worst-case scenarios of its continued use.

The best way to imagine these worst case scenarios is to think about the militarization of two capabilities in which artificial intelligence already touches our everyday lives: facial recognition and driverless vehicles. As you know if you’ve ever tagged a photo on Facebook, artificial intelligence has already reached the point at which you can be easily and immediately identified by algorithm.

“Safety cameras” have also gone up all over the world, suddenly granting unknown organizations the unchecked ability to gather and match faces with “identity databases” that contain our information without our permission, knowledge or control.
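To make concrete how accessible this capability has become, here is a minimal illustrative sketch using the open-source face_recognition Python library. The library, the file names, and the “watchlist” scenario are not drawn from the article; they are assumptions chosen only to show that matching a face against a database now takes a handful of lines of freely available code.

```python
# Illustrative sketch only: off-the-shelf face matching with the open-source
# "face_recognition" library. File names and the watchlist scenario are
# hypothetical, chosen to show how little code this takes today.
import face_recognition

# A "watchlist": an encoding computed once from a known photo.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A frame captured from any camera.
frame = face_recognition.load_image_file("camera_frame.jpg")

for unknown_encoding in face_recognition.face_encodings(frame):
    # compare_faces returns [True] when the two faces likely belong to the same person.
    match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
    if match:
        print("Match found: the person in the frame is on the watchlist.")
```

The point of the sketch is not the particular library but the low barrier: anyone with a photo database and a camera feed can run this kind of matching without permission, knowledge or control by the people being identified.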

The technology of driverless vehicles has also progressed with little involvement or awareness on the part of the public. The first death in a self-driving car came in 2016, when a Tesla operating on Autopilot crashed into a truck. The first case of a pedestrian killed by a driverless vehicle was only three months ago, in March 2018, when an autonomous Uber struck and killed a woman crossing the street in Arizona.

These facts explain the urgency behind the Google letter, which reflects a technology industry obsessed with profit, competition and shareholder value. Here are some other points that must be understood to gain a full picture of the dilemma our world is already in, and the harsh consequences we currently face.

Project Maven Is a Small Project:
JEDI Is the Larger Project

The Google letter called attention to Project Maven, which the company now says it will not renew. Even more importantly, the letter called attention to the existence of a larger US Department of Defense project called JEDI (Joint Enterprise Defense Infrastructure), which should be the primary focus of continuing attention to this topic.

There is little public information about this secret project, but its scope includes both artificial intelligence and cloud computing, which indicates massive computing power and scalability, as well as access to a bottomless supply of databases containing geographic and individual personal information.

Like most military technology projects, JEDI is not meant to be visible even to the taxpayers who pay for it, but we should hope that information about this large and expensive project will be released to the public. The craven choice of a project name obviously meant to evoke “Star Wars” suggests that the Department of Defense views this project with a disturbing level of grandeur and self-flattery. Yoda would not be impressed.

Google employees spoke up. Where are Amazon and Microsoft employees?

The letter signed by 3100+ Google employees calls out other companies by name:
“The argument that other firms, like Microsoft and Amazon, are also participating doesn’t make this any less risky for Google.”

Indeed, in the very lucrative field of cloud computing, Amazon is even bigger than Google. While most people think of Amazon as the world’s largest online store, software developers and technologists know of a completely different Amazon. This company leads the world in cloud computing, allowing both small and large organizations to purchase and use server capabilities quickly and easily.

Ten years ago, most companies ran their own servers. Today, a huge share of companies instead rent computing capacity from Amazon Web Services. Government and military organizations are among those who rely on Amazon’s cloud services, which also include advanced artificial intelligence and database capabilities.
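As a rough illustration of how frictionless this renting of computing power has become, the sketch below provisions a server with Amazon’s official boto3 Python SDK. The machine image ID and instance type are placeholders, and real use requires AWS credentials; the example is only meant to show that spinning up rented infrastructure is a few lines of code, for militaries and start-ups alike.

```python
# Minimal sketch: renting a server from Amazon's cloud with the boto3 SDK.
# The ImageId is a placeholder; a real call needs AWS credentials and a valid
# machine image for your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",          # a small, inexpensive server
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```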

We should hope that Amazon employees will be inspired by their peers at Google and begin to speak up in public about the social consequences of the work they do. Will any employees of Amazon declare, as their Google peers have, that “Amazon should not be in the business of war”?

Companies like Google and Amazon have a unique commitment to open source communities.

All corporations are not alike, and indeed Google’s famous self-imposed rule, “Don’t Be Evil,” has been taken seriously by countless open source developers who may not be Google employees but who contribute to and interact with Google through open source libraries such as TensorFlow, which provides deep learning capabilities.

This is one reason why the Google employees’ letter sent such a shock wave through the global community of open source developers. While a traditional military contractor like Raytheon or General Dynamics typically carries out all its work in private, artificial intelligence libraries like TensorFlow are unique collaborations between corporations and the public commons.
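To make “deep learning capabilities” concrete, here is a minimal sketch using TensorFlow’s public Keras API to train a tiny classifier on the MNIST handwritten-digit dataset. This is a standard beginner example of the kind of code the open-source community writes and shares every day; it says nothing about Google’s internal or military work, but the same freely available library sits underneath far more powerful systems.

```python
# Minimal sketch: a tiny image classifier built with TensorFlow's open-source
# Keras API, trained on the public MNIST handwritten-digit dataset.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),     # 28x28 image -> 784 values
    tf.keras.layers.Dense(128, activation="relu"),     # one hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),   # probabilities for digits 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=1)
print("Test accuracy:", model.evaluate(x_test, y_test)[1])
```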

The global open source software community has been integral to the development and healthy growth of the entire Internet, and this community has always stood for an explicit sense of social responsibility.

When employees say “Google should not be in the business of war”, they speak not only for their fellow Google employees but also for the international community of open source developers who contribute to their projects.

Weaponized AI Is Now a Reality, and Not Just in the USA

We should not have needed a letter from 3100 Google employees to warn us that the age of weaponized artificial intelligence is already upon us, and not just in the United States of America.

In the USA, this will inevitably result in growing public fear and hysteria about what other countries are doing in the field of weaponized AI. Military profiteers all over the world are surely counting on this arms race to escalate. This is the terrible reality of the situation we are already in.

The only sane answer is the abolition of war.

The manifesto about nuclear proliferation signed by Bertrand Russell, Albert Einstein and others in 1955 pointed to an answer that still eludes us. The only path to sanity for a world gripped by fear and primed to explode is the abolition of war. This was perfectly clear in 1955, but the leaders of the time were not capable of delivering on this hope.

Today, 63 years later, we see as clearly as ever that war only brings more war, and that technological advancements will continue to raise the stakes. The sickening vision of killer drones connected to massive real-time databases and equipped with state-of-the-art artificial intelligence capabilities chasing human beings down is no longer a vision of the future (as it was in the frightening “Metalhead” episode of “Black Mirror”, which aired only last year).

All the pieces are in place to make this sickening vision a reality, and the courageous act of 3100+ Google employees has now revealed to us that even some corporations that have pledged to uphold a moral standard are moving forward at full speed towards this future that nobody wants.

The stakes are raised, yet again. The responsibility is on all of us — not only Google employees, not only software developers, but all of us — to solve the worst problem the world has ever known and work towards the complete abolition of war.


Thank You to Google Employees
Who Reject the Business of War

Target: Google employees and all workers everywhere

World Beyond War

We the undersigned applaud those employees of Google who resist allowing Google to work in the business of war. We want to express our deep gratitude for your willingness to take this critical stand. The particular new dangers of automated weapons provide one more reason to make mass killing a thing of the past and to move public policies to a world beyond war.

We further want to encourage all workers at all companies in all countries, including at Google, to expand this effort until it results in a firm commitment to reject all military contracts — until every company meets the demand of Google’s employees to “publicize, and enforce a clear policy stating that neither [this company] nor its contractors will ever build warfare technology.”

*****
Note that Google’s new statement of principles says “[W]e will not design or deploy AI in the following application areas: . . . Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” but says nothing about non-AI military contracts.


AI at Google: Our Principles
Sundar Pichai / Google CEO

At its heart, AI is computer programming that learns and adapts. It can’t solve every problem, but its potential to improve our lives is profound. At Google, we use AI to make products more useful — from email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy.

Beyond our products, we’re using AI to help people tackle urgent problems. A pair of high school students are building AI-powered sensors to predict the risk of wildfires. Farmers are using it to monitor the health of their herds. Doctors are starting to use AI to help diagnose cancer and prevent blindness. These clear benefits are why Google invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code.

We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.

So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.

We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.

Objectives for AI applications
We will assess AI applications in view of the following objectives. We believe that AI should:
1. Be socially beneficial.
The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment.

As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

2. Avoid creating or reinforcing unfair bias.
AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

3. Be built and tested for safety.
We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

4. Be accountable to people.
We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

5. Incorporate privacy design principles.
We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

6. Uphold high standards of scientific excellence.
Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.

We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

7. Be made available for uses that accord with these principles.
Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors.

* Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use.

* Nature and uniqueness: whether we are making available technology that is unique or more generally available.

* Scale: whether the use of this technology will have significant impact.

* Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions.

AI Applications We Will Not Pursue
In addition to the above objectives, we will not design or deploy AI in the following application areas:
1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

3. Technologies that gather or use information for surveillance violating internationally accepted norms.

4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas.

These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

AI for the Long Term
While this is how we’re choosing to approach AI, we understand there is room for many voices in this conversation. As AI technologies progress, we’ll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will continue to share what we’ve learned to improve AI technologies and practices.

We believe these principles are the right foundation for our company and the future development of AI. This approach is consistent with the values laid out in our original Founders’ Letter back in 2004. There we made clear our intention to take a long-term perspective, even if it means making short-term tradeoffs. We said it then, and we believe it now.

Posted in accordance with Title 17, Section 107, for noncommercial, educational purposes.