Google Supplies AI for Drone Surveillance, Raising Ethical Concerns
Google is sharing AI technology with the U.S. military to improve drone surveillance, a move that has surprised Google employees, some of whom object to the company's actions.
According to Gizmodo, Google is currently providing computer vision AI for Project Maven, a Department of Defense initiative to employ AI technology in identifying objects in drone feeds. While the company claims that its AI partnership with the DoD is limited to "non-offensive" roles, the move is bound to raise ethical concerns among Google employees and customers who take issue with Google's military products. News of the program reportedly "set off a firestorm among employees" when details were recently revealed on an internal mailing list.
The DoD's Project Maven, also known in defense-ese as the Algorithmic Warfare Cross-Functional Team (AWCFT), is a government program founded in 2017 to "accelerate DoD's integration of big data and machine learning," to "[enable] the automated detection and identification of objects in as many as 38 categories captured by a drone's full-motion camera," and to provide "the ability to track individuals as they come and go from different locations." Project Maven is part of broader DoD initiatives to contract AI talent in academia and the private sector to assist in the development of military AI. Gizmodo estimates that the military spent $7.4 billion on AI development in 2017.
According to a Google spokesperson who provided more detail about the partnership, "this specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data." Google's DoD offerings likely build on the same or similar TensorFlow libraries as the company's powerful object-recognition and image-classification models, such as Inception. The spokesperson said, "…the technology flags images for human review, and is for non-offensive uses only." The company is "actively discussing" the issue internally while developing "policies and safeguards" around the use of machine learning technologies for the military.
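To give a sense of what such open source tooling looks like, here is a minimal sketch that classifies a single image with a pretrained Inception model using publicly available TensorFlow APIs. The specific model choice (InceptionV3 with ImageNet weights) and the file name `frame.jpg` are illustrative assumptions; the article does not describe Google's actual DoD pipeline.

```python
# A minimal sketch of TensorFlow-based image classification of the kind
# the article describes. The pretrained InceptionV3 model and the sample
# frame are illustrative assumptions, not details of the Maven pilot.
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# Load InceptionV3 with publicly available ImageNet weights.
model = InceptionV3(weights="imagenet")

# Load and preprocess a single frame (InceptionV3 expects 299x299 input).
img = image.load_img("frame.jpg", target_size=(299, 299))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Predict and print the top object labels with confidence scores; in a
# human-review workflow, these labels would flag frames for an analyst
# rather than trigger any automated action.
preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2%}")
```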
The spokesperson also stated, "this Google/Pentagon research project is only that: a research project to assess the viability of using AI for analyzing drone footage. There are no plans to implement any results of this project into AI-controlled military hardware."
The Gizmodo article also cites a revealing keynote at a Center for a New American Security summit on AI and global security in November, where Eric Schmidt, former executive chairman of Google and parent company Alphabet Inc., addressed criticism from the tech community "of somehow the military-industrial complex using their stuff to kill people incorrectly."
Schmidt implied that such attitudes were "related to the history of the Vietnam War and the founding of the tech industry," but said he doesn't think it should be a "big concern." He cited the specifics of the procurement process for government contracts, assuring that the military will "use this technology to help keep the country safe."
Google's technology could help relieve strain on heavily burdened intelligence analysts, drone operators, and other members of the military active in the Global War on Terror, now nearing its 17th year. By potentially improving the accuracy of ISR (intelligence, surveillance, and reconnaissance) analysis beyond human abilities, Google's Project Maven research may cut through the proverbial and literal "fog of war."
Improved object recognition for ISR could keep American troops in combat zones safer and more aware of the battle space, and may save additional lives by minimizing the costly mistakes that lead to friendly fire or the accidental civilian deaths that have plagued America's controversial drone program over the past decade. According to the company, Project Maven technology is already being deployed to support operations against ISIS.
Even if not directly employed on armed drones, Google's technology could still be used indirectly to identify, target, and eventually kill people, a troubling ethical gray area that may conflict with deeply held moral positions of the company's employees as well as the public. (After all, Google's motto from the start was "Don't be evil.") Additionally, even if designed for peaceable or at least ostensibly non-lethal applications, Google's Project Maven developments could eventually find their way into controversial domestic surveillance programs; drones are already used on American soil to surveil manhunts, protests, and the U.S. border (perhaps even implicating the 66% of the American population living within the 100-mile "extended border" under CBP jurisdiction).
Given the potential benefits of military-tailored AI to improve the efficacy of American forces (including minimizing civilian deaths) and to hedge against China's growing expertise in military AI, cooperation between the tech industry and the military may ultimately prove beneficial. Yet given the very real life-and-death implications of Google's Project Maven technology, the tech giant ought to remain accountable and transparent with its employees, customers, and the broader public, especially those who have political or ethical misgivings about the company's growing relationship with the DoD, or who simply prefer that the company stick to search engines and cell phones.