Media coverage of Project Maven poses the one question Google never wanted you to ask: might Google help pave the way for lethal AI? Graphic by Seeflection.

Google’s Worst Fears: Growing Backlash Within the Company

Coverage by The New York Times last week revealed how one of Google’s brightest minds, Dr. Fei-Fei Li, warned last year of potential fallout from the company’s involvement with Project Maven. Citing leaked internal communications and anonymous Google employees, the Times portrayed a company divided as never before: on one side, executive leadership prioritizing the company’s growth and downplaying the risks of Project Maven; on the other, a growing chorus of dissenting voices from Google’s rank and file who believe the DoD contract represents a breach of Google’s unique corporate culture and its commitment to not “be evil.”

Project Maven is a controversial partnership between the Department of Defense and Silicon Valley that allows military use of private-sector AI technology, including open-source Google computer vision algorithms, for drone surveillance analysis. In the wake of negative press following public attention on Project Maven, the company is facing exactly the public and internal backlash Dr. Li sought to avoid, drawing criticism from the tech research community and sparking an “existential crisis” within the company.

Employees have expressed a variety of viewpoints on the issue, with some protesting any and all DoD involvement on principle and others taking a more measured view of the nonlethal Project Maven, worried instead that the project will cede control of powerful Google products away from their developers, perhaps accelerating private and/or defense development of AI for lethal uses.

While Google, according to the Times, has indeed extended some lukewarm efforts to bridge a growing rift “between scientists with deep moral objections and salespeople salivating over defense contracts,” the company’s tepid response to the Project Maven controversy hasn’t stopped multiple employees from resigning and thousands of others from formally and informally protesting their employer over the complex, ethically fraught dilemmas the project raises.

Google’s “Googleplex” HQ in Mountain View, CA. Via user The Pancake of Heaven! / Wikimedia Commons.

Project Maven was publicly established in April 2017 and received some early coverage, including an excellent overview of the program by the Bulletin of the Atomic Scientists, but it remained relatively unknown even within Google until the explosive publication of the Gizmodo article “Google Is Helping the Pentagon Build AI for Drones” in March 2018.

The Gizmodo exposé thrust Google’s Project Maven involvement into the limelight and has garnered a flurry of mainstream coverage in the weeks and months since, most of it criticizing Google for its Project Maven involvement, its treatment of dissenting employees, or both.

The news of Project Maven prompted widespread protests from the technology and research communities, including Google employees, as well as criticism in the mainstream press. Shortly after the Gizmodo story, Google employees penned an open letter to Google CEO Sundar Pichai requesting “that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.” This internal letter has garnered about 4,000 signatures, including some of Google’s top engineering and research talent.

Additionally, the Project Maven revelation sparked high-profile outside petitions against Google. Researchers affiliated with the International Committee for Robot Arms Control (which also spearheads the Campaign to Stop Killer Robots) released an open letter in support of dissenting Google employees, which has gained more than 1,100 signatures, including some of the world’s top academic researchers in AI, computer science, and ethics. Among the petition’s many luminary signatories is Lord Anthony Giddens, who wrote in May calling for a 21st-century “Magna Carta” to check the tech industry’s power and govern harmful uses of technology, including weaponized AI.

A group of Silicon Valley employees describing themselves online as the “Tech Workers Coalition” also wrote a similar letter of solidarity with Google employees, stating that “we as tech workers must adopt binding ethical standards for the use of AI that will let us build the world we believe in” and requesting that “Google should break its contract with the Department of Defense.”

Finally, about a dozen Google employees resigned in an act of protest against their company, concerned that “executives have become less transparent with their workforce about controversial business decisions and seem less interested in listening to workers’ objections than they once did,” according to Gizmodo. One employee anonymously stated that “over the last couple of months, I’ve been less and less impressed with the response and the way people’s concerns are being treated and listened to.”

The NYT article’s insights are based primarily on leaked email correspondence from Dr. Fei-Fei Li, a brilliant AI researcher who worked her way from obscurity as a Chinese immigrant at age 16 to her present role at the avant-garde of AI research. In addition to her role as Chief Scientist of AI at Google Cloud, the division of Google responsible for Project Maven development, Dr. Li also heads Stanford University’s Artificial Intelligence Lab as well as its specialized Computer Vision Lab.


Above: Dr. Fei-Fei Li presenting her work in computer vision AI in 2015. One of the cornerstones of modern AI advancements, such image recognition systems were contracted from Google by the DoD under Project Maven for use in drone footage analysis, with potentially lethal consequences.

Dated September 2017, Dr. Li’s leaked correspondence began with a message from Google’s head of defense projects, Scott Frohman, who posed the “burning question” of how to present Project Maven to the public, fearing that the project would be a hard sell. Dr. Li stressed in her replies to Frohman that Google should “avoid at ALL COSTS any mention or implication of AI.”

While the computer vision systems currently employed in combat zones by Project Maven are indeed a form of AI, in the public mind “AI” is often mistaken for self-aware systems and autonomous robotics, the much-maligned “killer robot” trope popularized by films such as 2001: A Space Odyssey, I, Robot, Stealth, the Terminator series, and other sci-fi tales of weaponized AI gone rogue.

However, “killer robots” (lethal autonomous weapons systems, in AI lingo) do pose a genuine threat to humanity, and the technology is a risk taken seriously by a growing contingent of AI and robotics researchers concerned over these systems’ ethical implications, including their use in warfare. In 2015, the Future of Life Institute released an open letter calling for an international ban on the development of weaponized AI, endorsed by top AI minds including Elon Musk and even Google’s own long-time employee and head of AI, Jeff Dean.

Dr. Fei-Fei Li is certainly well versed in the troubling implications of autonomous weapons and military-related AI generally, and the leaked emails reveal her concern that Project Maven would be conflated in the public mind with autonomous weapons development:

“Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google. […] I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.”

The media did run with the “red meat” of implicating Google in the development of military AI. The increasingly deadly use of armed drones has tarnished military robotics in the public eye, and the association of Google with the military drone program in the press spells a PR nightmare for the company. Worsening the problem, much of the sensationalized Project Maven coverage, along with the slew of Google criticism on social media, suggests that many inaccurately believe that Google’s “AI” technology is installed on armed drones or is used to control them.

In response to the email leak, Dr. Li emphasized in a statement to the Times that “I believe in human-centered AI to benefit people in positive and benevolent ways. It is deeply against my principles to work on any project that I think is to weaponize AI.” Her statement suggests, at best, a tone-deafness to the negative perceptions of Project Maven (even proponents of the project would be hard-pressed to say it “benefits people in positive and benevolent ways”) or, at worst, exemplifies the “nerd-sighted” perspective of many Silicon Valley contemporaries, unable or unwilling to see the far-reaching implications of their innovations, no matter how brilliant.

Google’s Project Maven contract is fairly limited in scope and will be used only in non-lethal applications not involving weaponized or autonomous AI, points that Google executives have belabored to the press and employees alike, and it may even help minimize the carnage of war by allowing better target discrimination, saving the lives of allied troops and civilians alike. Yet Dr. Li and her cohorts among Google’s top ranks miss the foundational critique of Project Maven: the technology need not be directly connected to weapons or used autonomously to cause harm and to raise valid concerns over its use and potential misuse.

As detailed in an earlier piece about Project Maven’s current and potential future applications, the technology carries sobering risks that must be considered. Even narrow AI such as Google’s Project Maven contribution can be used beyond the initial limited scope of its contract. The same or similar technology may, for instance, be contracted by other federal agencies for use on American soil, and if Google’s Project Maven AI proves successful, then that AI or a government derivative could very easily be deployed on armed platforms.

Furthermore, the technology might lead to disastrous unintended consequences that Dr. Li and others have not responsibly forecast. Designed and trained for research and private use with few life-or-death consequences, Google’s Project Maven AI may have critical vulnerabilities, biases, or flaws that could indirectly lead to wrongful combat zone deaths or to exploitation by adversaries, a risk made more likely by the fact that Project Maven was explicitly designed by the DoD to streamline AI deployment and may well have been rushed prematurely into combat zone use.

Finally, even if nothing goes wrong and Google’s AI is used only for its intended “non-lethal” purpose, the technology will contribute to combat deaths as well as to lives potentially saved, a conflict of conscience for some civilian Google employees who never expected to be developing military technology.

A variety of Silicon Valley companies contract with the DoD, and some, such as Palantir, have built their entire brand on defense and intelligence work. Google, however, has historically been less keen to work with the government and presents itself as a more humanistic, globally oriented company with values born of its now-defunct motto “Don’t be evil.”

Even if Project Maven were assumed, for argument’s sake, to be legitimate, necessary, or even laudable in its military application, the fact that Google’s involvement stands in such contrast to its corporate ethos is a betrayal of the deeply held values many workers specifically sought at Google. In the cutthroat free market of the technology industry, a host of competing companies with more established DoD relationships, such as Microsoft and Amazon, can offer their own computer vision AI to projects such as Maven: is Google’s highest allegiance to its employees and its ethics, or to its bottom line? And what of Google’s civic responsibility to assist US troops or, conversely, to oppose what many see as a growing military-industrial complex?

According to the NYT article, Google held an invitation-only internal conference on these issues, hearing a variety of opinions from within the company about Project Maven concerns and potential ways forward:

Google, according to the invitation email, decided to hold a discussion on April 11 representing a “spectrum of viewpoints” involving Ms. Greene; Meredith Whittaker, a Google A.I. researcher who is a leader in the anti-Maven movement; and Vint Cerf, a Google vice president who is considered one of the fathers of the internet for his pioneering technology work at the Defense Department. According to employees who watched the discussion, Ms. Greene held firm that Maven was not using A.I. for offensive purposes, while Ms. Whittaker argued that it was hard to draw a line on how the technology would be used.

Finally, Google promised employees that it would draft a coherent set of guiding principles for future AI development and defense contracting, including a clause that “precluded the use of AI in weaponry,” according to sources who spoke with the New York Times. Google employees expect the new guidelines to be announced in the coming weeks.