Google Employees Say “No” to Working for the Pentagon

June 3, 2018 — Gizmodo & The New York Times

AI Weekly: The end of Project Maven at Google shows the power of tech workers who take a stand


Khari Johnson

(June 1, 2018) — We’re roughly halfway through 2018, and one of the most important AI stories to emerge so far is Project Maven and its fallout at Google. The program to use AI to analyze drone video footage began last year, and this week we learned of the Pentagon’s plans to expand Maven and establish a Joint Artificial Intelligence Center.

We also learned that Google believed it would make hundreds of millions of dollars from participating in the Maven project and that Maven was reportedly tied directly to a cloud computing contract worth billions of dollars. Today, news broke that Google will discontinue its Maven contract when it expires next year.

The company is reportedly drafting a military projects policy that is due out in the coming weeks. According to the New York Times, the policy will include a ban on projects related to autonomous weaponry.

Most revealing in all of this are the words of leaders like Google Cloud chief scientist Dr. Fei-Fei Li. In emails obtained by the New York Times, written last fall while Google was considering how to announce its participation in Maven, executives expressed awareness of just how divisive an issue autonomous weaponry can be.

“Avoid at ALL COSTS any mention or implication of AI,” Li wrote. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google . . . I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.”

Since Google’s involvement with Maven became public in March, the project has certainly attracted attention from some members of the press. I’ve written that Google should listen to its employees and stay out of the business of war and that Maven reflects the need for a Hippocratic oath for AI practitioners, but the backlash isn’t just coming from journalists.

Inside Google, about a dozen employees resigned in protest, and more than 3,000 employees — including AI chief Jeff Dean — have signed letters stating that Google shouldn’t participate in the creation of autonomous weaponry. Outside Google, petitions from organizations like the Tech Workers Coalition and International Committee for Robot Arms Control have also attracted signatures from the broader tech and AI community.

As the debate rages on, one overlooked or little-known fact about Google’s participation in Maven has emerged: Google wasn’t the only tech company invited to participate. IBM and smaller firms like Colorado-based DigitalGlobe have also been invited to participate in the program, according to Gizmodo.

AI isn’t new. Military usage of AI isn’t either, but as AI goes beyond just offering personalized results when you open an app, the ethical stance AI practitioners choose to take can play a role in defining how this discipline is applied to virtually every sector of business, government, and society.


Google is drafting a military projects policy

Google says it will formulate a policy around defense and military contracts, following the fallout from its involvement in the Pentagon’s controversial Project Maven program.

Eric Schmidt says Elon Musk is ‘exactly wrong’ about AI
When former Google CEO Eric Schmidt was asked about Elon Musk’s warnings about AI, he had a succinct answer: “I think Elon is exactly wrong.” (via TechCrunch)

Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program
Kate Conger / Gizmodo

(June 1, 2018) — Google will not seek another contract for its controversial work providing artificial intelligence to the US Department of Defense for analyzing drone footage after its current contract expires.

Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract, Greene said. The meeting, dubbed Weather Report, is a weekly update on Google Cloud’s business.

Google would not choose to pursue Maven today because the backlash has been terrible for the company, Greene said, adding that the decision was made at a time when Google was more aggressively pursuing military work. The company plans to unveil new ethical principles about its use of AI next week. A Google spokesperson did not immediately respond to questions about Greene’s comments.

Google’s decision to provide artificial intelligence to the Defense Department for the analysis of drone footage has prompted backlash from Google employees and academics. Thousands of employees have signed a petition asking Google to cancel its contract for the project, nicknamed Project Maven, and dozens of employees have resigned in protest.

Google, meanwhile, defended its work on Project Maven, with senior executives noting that the contract is of relatively little value and that its contribution amounts merely to providing the Defense Department with open-source software.

But internal emails reviewed by Gizmodo show that executives viewed Project Maven as a golden opportunity that would open doors for business with the military and intelligence agencies. The emails also show that Google and its partners worked extensively to develop machine learning algorithms for the Pentagon, with the goal of creating a sophisticated system that could surveil entire cities.

The two sets of emails reveal that Google’s senior leadership was enthusiastically supportive of Project Maven—especially because it would set Google Cloud on the path to win larger Pentagon contracts—but deeply concerned about how the company’s involvement would be perceived. The emails also outline Google’s internal timeline and goals for Project Maven.

In order to work on Project Maven, Google Cloud faced a challenge. The company would need to use footage gathered by military drones to build its machine learning models, but it lacked the official government authorization to hold that kind of sensitive data in its cloud.

That authorization, known as FedRAMP, establishes security standards for cloud services that contract with the government. But Google didn’t have it—so it had to rely on other geospatial imagery for its early work on Project Maven. According to an email written by Aileen Black, an executive director overseeing Google’s business with the US government, Project Maven sponsored Google’s application for higher levels of FedRAMP authorization, Security Requirements Guide 4 and 5. “They are really fast tracking our SRG4 ATO (security cert),” she wrote. “This is priceless.”

In late March of this year, Google announced that it had been granted provisional FedRAMP 4 authorization to operate, or ATO. “With this ATO, Google Cloud Platform has demonstrated its commitment to extend to government customers,” Suzanne Frey, Google’s director of trust, security, privacy, and compliance, told reporters during a press call.

Obtaining this authorization was crucial not just for Project Maven, but for Google’s future pursuit of other government contracts. Google is reportedly competing for a Pentagon cloud computing contract worth $10 billion.

Greene had told concerned employees during meetings that Google’s contract with the Department of Defense was worth only $9 million, Gizmodo first reported—a relatively small figure as far as government contracts go.

However, internal emails reviewed by Gizmodo show that the initial contract was worth at least $15 million and that the budget for the project was expected to grow as high as $250 million. This set of emails, first reported by the New York Times, shows senior executives in Google Cloud worrying about how Google’s involvement in Project Maven would be perceived once it became public.

In another set of emails not previously made public, Google employees working on Project Maven described meeting with Lieutenant General Jack Shanahan, who has spearheaded the Maven initiative, and other government representatives at Google’s offices. These emails describe technical milestones for Maven and describe Google’s in-depth efforts to develop the technology.

Google secured the Project Maven contract in late September, the emails reveal, after competing for months against several other “AI heavyweights” for the work. IBM was in the running, as Gizmodo reported last month, along with Amazon and Microsoft. One of the terms of Google’s contract with the Defense Department was that Google’s involvement not be mentioned without the company’s permission, the emails state.

“It gives me great pleasure to announce that the US Undersecretary of Defense for Intelligence—USD(I)—has awarded Google and our partners a contract for $28M, $15M of which is for Google ASI, GCP, and PSO,” Scott Frohman, a defense and intelligence sales lead at Google, wrote in a September 29, 2017 email.

“Maven is a large government program that will result in improved safety for citizens and nations through faster identification of evils such as violent extremist activities and human rights abuses. The scale and magic of GCP [Google Cloud Platform], the power of Google ML [machine learning], and the wisdom and strength of our people will bring about multi-order-of-magnitude improvements in safety and security for the world.”

Other emails describe meetings in late 2017 with Pentagon representatives at Google’s Mountain View and Sunnyvale offices. “Customer considers Cloud AI team the core of the MAVEN program, where everything else will be built to test and deploy our ML models,” one message reads. Google planned to deliver the product of its work at the end of March, and continue refining it through June.

The company also assigned more than 10 of its employees to work on Project Maven. When Gizmodo reported Google’s involvement in the project earlier this year, Google downplayed its work, saying it had merely provided its open-source TensorFlow software to the Pentagon.

However, Google intended to build a “Google-earth-like” surveillance system that would allow Pentagon analysts to “click on a building and see everything associated with it” and build graphs of objects like vehicles, people, land features, and large crowds for “the entire city,” states one email recapping a Maven kickoff meeting with Pentagon representatives. Google’s artificial intelligence would bring “an exquisite capability” for “near-real time analysis,” the email said.

By December, Google had already demonstrated a high accuracy rate in classifying images for Project Maven. Working with imagery provided by a geospatial imagery firm, DigitalGlobe, and data labeling provided by an artificial intelligence firm, CrowdFlower, Google was able to build a system that could detect vehicles missed by expert image labelers.

“Customer’s leadership team was extremely happy with your work, your active participation, and the early results we demonstrated using validation dataset,” Reza Ghanadan, a senior engineering program manager at Google, wrote. “Among other things, the results showed several cases that with 90+% confidence the model detected vehicles which were missed by expert labelers.”

Despite the excitement over Google’s performance on Project Maven, executives worried about keeping the project under wraps. “It’s so exciting that we’re close to getting MAVEN! That would be a great win,” Fei-Fei Li, chief scientist for AI at Google Cloud, wrote in a September 24, 2017 email. “I think we should do a good PR on the story of DoD collaborating with GCP from a vanilla cloud technology angle (storage, network, security, etc.), but avoid at ALL COSTS any mention or implication of AI.”

“Google is already battling with privacy issues when it comes to AI and data; I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry,” she added.

Greene told employees today that the conversation about the ethical use of artificial intelligence is huge and that Google is at the forefront of that conversation. “It is incumbent on us to show leadership,” Greene said, according to a source.

Google Will Not Renew Pentagon Contract That Upset Employees
The New York Times

SAN FRANCISCO (June 1, 2018) — Google, hoping to head off a rebellion by employees upset that the technology they were working on could be used for lethal purposes, will not renew a contract with the Pentagon for artificial intelligence work when a current deal expires next year.

Diane Greene, who is the head of the Google Cloud business that won a contract with the Pentagon’s Project Maven, said during a weekly meeting with employees on Friday that the company was backing away from its A.I. work with the military, according to a person familiar with the discussion but not permitted to speak publicly about it.

Google’s work with the Defense Department on the Maven program, which uses artificial intelligence to interpret video images and could be used to improve the targeting of drone strikes, roiled the internet giant’s work force. Many of the company’s top A.I. researchers, in particular, worried that the contract was the first step toward using the nascent technology in advanced weapons.

But it is not unusual for Silicon Valley’s big companies to have deep military ties. And the internal dissent over Maven stands in contrast to Google’s biggest competitors for selling cloud-computing services — Amazon and Microsoft — which have aggressively pursued Pentagon contracts without pushback from their employees.

Google’s self-image is different — it once had a motto of “don’t be evil.” A number of its top technical talent said the internet company was betraying its idealistic principles, even as its business-minded officials worried that the protests would damage its chances to secure more business from the Defense Department.

About 4,000 Google employees signed a petition demanding “a clear policy stating that neither Google nor its contractors will ever build warfare technology.” A handful of employees also resigned in protest, while others openly urged the company to cancel the Maven contract.

Months before it became public, senior Google officials were worried about how the Maven contract would be perceived inside and outside the company, The New York Times reported this week. By courting business with the Pentagon, they risked angering a number of the company’s highly regarded A.I. researchers, who had vowed that their work would not become militarized.

Jim Mattis, the defense secretary, had reached out to tech companies and sought their support and cooperation as the Pentagon makes artificial intelligence a centerpiece of its weapons strategy. The decision made by Google on Friday is a setback to that outreach.

But if Google drops out of some or all of the competition to sell the software that will guide future weaponry, the Pentagon is likely to find plenty of other companies happy to take the lucrative business. A Defense Department spokeswoman did not reply to a request for comment on Friday.

Ms. Greene’s comments were reported earlier by Gizmodo.

The money for Google in the Project Maven contract was never large by the standards of a company with revenue of $110 billion last year — $9 million, one official told employees, or a possible $15 million over 18 months, according to an internal email.

But some company officials saw it as an opening to much greater revenue down the road. In an email last September, a Google official in Washington told colleagues she expected Maven to grow into a $250 million-a-year project, and eventually it could have helped open the door to contracts worth far more, most notably a multiyear, multibillion-dollar cloud computing project called JEDI, or Joint Enterprise Defense Infrastructure.

Whether Google’s Maven decision is a short-term reaction to employee protests and adverse news coverage or reflects a more sweeping strategy not to pursue military work is unclear. The question of whether a particular contract contributes to warfare does not always have a simple answer.

When the Maven work came under fire inside Google, company officials asserted that it was not “offensive” in nature. But Maven is using the company’s artificial intelligence software to improve the sorting and analysis of imagery from drones, and some drones rely on such analysis to identify human targets for lethal missile shots.

Google management had told employees that it would produce a set of principles to guide its choices in the use of artificial intelligence for defense and intelligence contracting. At Friday’s meeting, Ms. Greene said the company was expected to announce those guidelines next week.

Google has already said that the new artificial intelligence principles under development precluded the use of A.I. in weaponry. But it was unclear how such a prohibition would be applied in practice and whether it would affect Google’s pursuit of the JEDI contract.

Defense Department officials are themselves wrestling with the complexity of their move into cloud computing and artificial intelligence. Critics have questioned the proposal to give the entire JEDI contract, which could extend for 10 years, to a single vendor. This week, officials announced they were slowing the contracting process down.

Dana White, the Pentagon spokeswoman, said this week that the JEDI contract had drawn “incredible interest” and more than 1,000 responses to a draft request for proposals. But she said officials wanted to take their time.

“So, we are working on it, but it’s important that we don’t rush toward failure,” Ms. White said. “This is different for us. We have a lot more players in it. This is something different from some of our other acquisition programs because we do have a great deal of commercial interest.”

Ms. Greene said the company probably would not have sought the Maven work if company officials had anticipated the criticism, according to notes on Ms. Greene’s remarks taken by a Google employee and shared with The Times.

Another person who watched the meeting added that Ms. Greene said Maven had been “terrible for Google” and that the decision to pursue the contract was done when Google was more aggressively going after military work.

Google does other, more innocuous business with the Pentagon, including military advertising on Google properties and Google’s ad platform, as well as providing web apps like email.

Meredith Whittaker, a Google A.I. researcher who was openly critical of the Maven work, wrote on Twitter that she was “incredibly happy about this decision, and have a deep respect for the many people who worked and risked to make it happen. Google should not be in the business of war.”


Even though the internal protest had carried on for months, there was no indication that employee criticism of the deal was dying down.

Earlier this week, one Google engineer — on the company’s internal message boards — proposed the idea of employees protesting Google Cloud’s conference at the Moscone Center in San Francisco in July with a campaign called “Occupy Moscone Center,” fashioned after the Occupy Wall Street protests.

That engineer resigned from the company this week in protest of Maven and planned for Friday to be his last day. But he said he was told on Friday morning to leave immediately, according to an email viewed by The Times.

Peter W. Singer, who studies war and technology at New America, a Washington research group, said many of the tools the Pentagon was seeking were “neither inherently military nor inherently civilian.” He added, “This is not cannons and ballistic missiles.” The same software that speeds through video shot with armed drones can be used to study customers in fast-food restaurants or movements on a factory floor.

Mr. Singer also said he thought Google employees who denounced Maven were somewhat naïve, because Google’s search engine and the video platform of its YouTube division have been used for years by warriors of many countries, as well as Al Qaeda and the Islamic State.

“They may want to act like they’re not in the business of war, but the business of war long ago came to them,” said Mr. Singer, author of a book examining such issues called “LikeWar,” scheduled for publication in the fall.

Project Maven to Deploy Computer Algorithms to War Zone by Year’s End

WASHINGTON (July 21, 2017) — Winning wars with computer algorithms and artificial intelligence was among the topics that Defense Department intelligence officials discussed during a recent Defense One Tech Summit here.

A stand-alone exhibit titled “Innovations in Defense: Artificial Intelligence and the Challenge of Cybersecurity” features Pittsburgh-based team ForAllSecure’s Mayhem Cyber Reasoning System. The system took first place at the August 2016 Cyber Grand Challenge finals, beating out six other computers. The Mayhem CRS is now on display at the Smithsonian’s National Museum of American History. The exhibit, produced by the Lemelson Center for the Study of Invention and Innovation, runs through Sept. 17, 2017.

Presenters included Marine Corps Col. Drew Cukor, chief of the Algorithmic Warfare Cross-Function Team in the Intelligence, Surveillance and Reconnaissance Operations Directorate-Warfighter Support in the Office of the Undersecretary of Defense for Intelligence.

By the end of the calendar year, the department will field advanced computer algorithms onto government platforms to extract objects from massive amounts of moving or still imagery, Cukor said in his remarks.

“People and computers will work symbiotically to increase the ability of weapon systems to detect objects,” Cukor added. “Eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they’re doing now. That’s our goal.”

A computer algorithm is a set of rules to be followed during problem-solving operations. Cukor described an algorithm as about 75 lines of Python code “placed inside a larger software-hardware container.”
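Cukor’s image of a short Python program “placed inside a larger software-hardware container” can be pictured, very loosely, with a toy sketch. Everything below (the class list, the 90 percent confidence threshold, the stand-in scoring function) is hypothetical and merely stands in for a trained neural network; none of it is drawn from Project Maven itself:

```python
# Hypothetical sketch: a tiny "algorithm in a container" for imagery triage.
# score_region() is a stand-in for a trained model, not a real detector.

CLASSES = ["vehicle", "person", "building"]  # illustrative object classes
CONFIDENCE_THRESHOLD = 0.9  # keep only high-confidence detections

def score_region(region):
    """Stand-in for a model: returns (class label, confidence) for a region."""
    # A real system would run a neural network over the region's pixels here.
    label = CLASSES[region["feature"] % len(CLASSES)]
    confidence = region["signal"]
    return label, confidence

def detect_objects(frame):
    """Scan candidate regions of one frame; keep confident detections only."""
    detections = []
    for region in frame["regions"]:
        label, confidence = score_region(region)
        if confidence >= CONFIDENCE_THRESHOLD:
            detections.append({"label": label, "confidence": confidence})
    return detections

if __name__ == "__main__":
    frame = {"regions": [
        {"feature": 0, "signal": 0.95},  # confident region -> kept
        {"feature": 1, "signal": 0.40},  # weak region -> dropped
    ]}
    print(detect_objects(frame))
```

The point of the sketch is only the shape of the workflow Cukor describes: a small piece of decision logic wrapped around a much larger trained model and the surrounding data pipeline.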

He said the immediate focus is 38 classes of objects that represent the kinds of things the department needs to detect, especially in the fight against the Islamic State of Iraq and Syria.

Project Maven
The effort to help a workforce increasingly overwhelmed by incoming data, including millions of hours of video, began in April when then-Deputy Defense Secretary Bob Work announced in a memo that he was establishing an Algorithmic Warfare Cross-Functional Team, overseen by the undersecretary of defense for intelligence, to work on something he called Project Maven.

“As numerous studies have made clear, the department of defense must integrate artificial intelligence and machine learning more effectively across operations to maintain advantages over increasingly capable adversaries and competitors,” Work wrote.

Exploitation Analyst airmen assigned to the 41st Intelligence Squadron have begun using advanced mobile desktop training that challenges each individual analyst with cyberspace maneuvers to achieve mission objectives, at Fort George G. Meade, Md. Air Force illustration by Staff Sgt. Alexandre Montes

“Although we have taken tentative steps to explore the potential of artificial intelligence, big data and deep learning,” he added, “I remain convinced that we need to do much more and move much faster across DoD to take advantage of recent and future advances in these critical areas.”

Project Maven focuses on computer vision — an aspect of machine learning and deep learning — that autonomously extracts objects of interest from moving or still imagery, Cukor said. Biologically inspired neural networks are used in this process, and deep learning is defined as applying such neural networks to learning tasks.
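At toy scale, “extracting objects of interest from moving imagery” can be illustrated with frame differencing, one of the simplest motion-extraction techniques. The sketch below is purely illustrative: a Maven-style system would use deep neural networks rather than this kind of raw pixel arithmetic, and the threshold value here is an arbitrary assumption:

```python
def diff_frames(prev, curr, threshold=30):
    """Mark pixels that changed significantly between two grayscale frames."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev, curr)
    ]

def changed_pixels(mask):
    """Coordinates of changed pixels: a crude 'object of interest' extraction."""
    return [(r, c) for r, row in enumerate(mask)
            for c, value in enumerate(row) if value]

if __name__ == "__main__":
    # Two tiny 2x3 grayscale frames; two pixels brighten between them.
    prev = [[10, 10, 10], [10, 10, 10]]
    curr = [[10, 200, 10], [10, 10, 90]]
    mask = diff_frames(prev, curr)
    print(changed_pixels(mask))  # -> [(0, 1), (1, 2)]
```

Where this toy version stops at flagging changed pixels, the deep-learning approach Cukor describes learns to label what the changed pixels actually are, which is why it needs the large volumes of triaged, labeled training data discussed below.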

“This effort is an announcement . . . that we’re going to invest for real here,” he said.

Working With Industry
Rapidly delivering artificial intelligence to a combat zone won’t be easy, Cukor said.

“There is no ‘black box’ that delivers the AI system the government needs, at least not now,” he said. “Key elements have to be put together . . . and the only way to do that is with commercial partners alongside us.”

Work to be accomplished over the next few months includes triaging and labeling data so the algorithms can be trained, the colonel explained.

The Intelligence, Surveillance and Reconnaissance Division at the Combined Air Operations Center at Al Udeid Air Base, Qatar, provides a common threat and targeting picture that is key to planning and executing theater-wide aerospace operations to meet the Combined Forces Air Component commander’s objectives. It is also the means by which the effects of air and space operations are measured. Air Force photo

“That work is inherently governmental and so we have a large group of people — sophisticated analysts and engineers — who are going through our data and cleaning it up. We also have a relationship with a significant data-labeling company that will provide services across our three networks — the unclassified and the classified networks — to allow our workforce to label our data and prepare it for machine learning,” Cukor said.

The department has a significant effort ongoing to procure computational power, including graphic processing units that allow training of machine-learning algorithms, he said. An algorithmic development contract also is in process — the department will go through a competitive selection process to find vendors that can provide algorithms against DoD data.

“You don’t buy AI like you buy ammunition,” he added. “There’s a deliberate workflow process and what the department has given us with its rapid acquisition authorities is an opportunity for about 36 months to explore what is governmental and [how] best to engage industry [to] advantage the taxpayer and the warfighter, who wants the best algorithms that exist to augment and complement the work he does.”

Other aspects of the work include integrating and fielding the algorithms, and once an algorithm is on a platform it must be optimized over its lifecycle, Cukor said.

AI Arms Race
“We are in an AI arms race,” Cukor said. ” . . . It’s happening in industry [and] the big five Internet companies are pursuing this heavily. Many of you will have noted that Eric Schmidt [executive chairman of Alphabet Inc.] is calling Google an AI company now, not a data company.”

The colonel described the technology available commercially, the state-of-the-art in computer vision, as “frankly . . . stunning,” thanks to work in the area by researchers and engineers at Stanford University, the University of California-Berkeley, Carnegie Mellon University and Massachusetts Institute of Technology, and a $36 billion investment last year across commercial industry.

“No area will be left unaffected by the impact of this technology,” he added.

For now, many tasks, like computer vision, are ready for AI capabilities and many are not, Cukor said, noting that “AI will not be selecting a target [in combat] . . . any time soon. What AI will do is complement the human operator.”

Before deploying algorithms to combat zones, Cukor said, “you’ve got to have your data ready and you’ve got to prepare and you need the computational infrastructure for training.”

Also needed are algorithm developers and software engineers, he said, an interface must be developed between AI and human operators, and ultimately integration and optimization will be needed over the deployment lifecycle.

“All of these things have got to be put in harmony over the next 36 months as we move down this path,” Cukor said.