A QinetiQ Titan robot fitted with a Javelin anti-tank missile launcher, one example of an AI-driven robot, was featured on BreakingDefense.com.

DARPA Experiments with Robots & Drones on Simulated Battlefields with Future in Mind

Last summer, the Defense Advanced Research Projects Agency (DARPA) ran exercises south of Seattle in which AI directed drones and robots to take out terrorists hiding in buildings. The exercise was a simulation run over radio controls, but as WIRED Magazine reports, it marks a major step in the Pentagon's move toward letting AI make decisions in battle.

The U.S. military believes that AI-operated machines can outperform humans in certain combat situations by making faster, more accurate decisions.

“General John Murray of the U.S. Army Futures Command told an audience at the US Military Academy last month that swarms of robots will force military planners, policymakers, and society to think about whether a person should make every decision about using lethal force in new autonomous systems. Murray asked: ‘Is it within a human’s ability to pick out which ones have to be engaged’ and then make 100 individual decisions? ‘Is it even necessary to have a human in the loop?’ he added.”

Other military strategists, too, are questioning the value of keeping humans involved in decisions about lethal force, according to the piece written by Will Knight.

“Timothy Chung, the Darpa program manager in charge of the swarming project, says last summer’s exercises were designed to explore when a human drone operator should, and should not, make decisions for the autonomous systems. For example, when faced with attacks on several fronts, human control can sometimes get in the way of a mission, because people are unable to react quickly enough. ‘Actually, the systems can do better from not having someone intervene,’ Chung says.”

The exercises come at a time when the National Security Commission on Artificial Intelligence (NSCAI), an advisory group created by Congress, has recommended that the U.S. not join international efforts to ban the development of AI weapons.

read more at wired.com