30 Countries Sign onto Campaign to Stop Use of Autonomous Killing Machines
Human Rights Watch (HRW) recently announced that it has commitments from 30 countries to push for an international treaty banning killer robots, the culmination of eight years of work, according to a story on vice.com. The organization compiled a report showing that 97 countries favor some version of the treaty:
“The report…says most of them want to ‘retain human control over the use of force.’ Additionally, a growing number of policymakers, artificial intelligence experts, private companies, international and domestic organizations, and ordinary individuals have also endorsed the call to ban fully autonomous weapons. The authors explain that autonomous weapons ‘would decide who lives and dies, without … inherently human characteristics such as compassion that are necessary to make complex ethical choices.’”
The only real hitch is that the countries making drones and war robots, primarily the United States, Russia and China, have shown no interest in signing such a treaty.
The Campaign to Stop Killer Robots, the coalition HRW coordinates, seeks to get the United Nations to draft an international treaty and shepherd it through to signature. UN Secretary-General António Guterres has already pledged his support, but the global body is unlikely to meet for some months.
According to a Wired magazine story, the idea of machines being given free rein to kill people should send a chill down your spine:
“Militaries have a compelling reason to keep humans involved in lethal decisions. For one thing, they’re a bulwark against malfunctions and flawed interpretations of data; they’ll make sure, before pulling the trigger, that the automated system hasn’t misidentified a friendly ship or neutral vessel. Beyond that, though, even the most advanced forms of artificial intelligence cannot understand context, apply judgment, or respond to novel situations as well as a person. Humans are better suited to getting inside the mind of an enemy commander, seeing through a feint, or knowing when to maintain the element of surprise and when to attack.”
No one wants a computer glitch to wipe out a city of innocent civilians, and some in the tech world have already concluded that unleashing killer robots is a terrible idea. Laura Nolan, a former Google engineer, quit her job last year in protest over work to make the U.S. Defense Department’s Project Maven drones more efficient at hunting their targets. Nolan told theguardian.com that killer robots not guided by human remote control should be outlawed, just like chemical weapons, because they can do “calamitous things that they were not originally programmed for.”
“There could be large-scale accidents because these things will start to behave in unexpected ways,” Nolan said. “Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”
The end isn’t near yet, but the Wired piece predicts it’s only a matter of time before robotic weapons are fully capable of acting on their own, and far more effectively than under human control:
“Military scholars in China have hypothesized about a ‘battlefield singularity,’ a point at which combat moves faster than human cognition. In this state of ‘hyperwar,’ as some American strategists have dubbed it, unintended escalations could quickly spiral out of control. The 2010 ‘flash crash’ in the stock market offers a useful parallel: Automated trading algorithms contributed to a temporary loss of nearly a trillion dollars in a single afternoon. To prevent another such calamity, financial regulators updated the circuit breakers that halt trading when prices plummet too quickly. But how do you pull the plug on a flash war?”
Read more at vice.com