DeepMind’s website espouses a philosophy that is being tested with the creation of AI that can out-fly human pilots.

Algorithm Beats Human Pilot in Virtual F-16 Dogfight Simulation

Did you miss the report about the computer algorithm that defeated a top F-16 pilot in a dogfight simulation? Last week, a technique popularized by DeepMind was adapted to control an autonomous F-16 fighter plane in a Pentagon-funded contest designed to show off the capabilities of AI systems. In the final stage of the event, a similar algorithm went head-to-head with a real F-16 pilot flying with a VR headset and simulator controls. The AI pilot won, 5-0.

An F-16 Fighting Falcon simulation showed that AI could outmaneuver a human pilot. (Source: U.S. Air Force)

This means virtual aerial warfare could become a reality, according to a story in Wired magazine, which noted in its first paragraph that the development runs counter to DeepMind's stated goal of keeping AI out of warfare.

“In July 2015, two founders of DeepMind, a division of Alphabet with a reputation for pushing the boundaries of artificial intelligence, were among the first to sign an open letter urging the world’s governments to ban work on lethal AI weapons. Notable signatories included Stephen Hawking, Elon Musk, and Jack Dorsey.”

The war game shows just how far AI has come, built on models created by companies like DeepMind, and hints at the direction the U.S. military may take this experiment. Others in AI are grappling with similar issues as more ethically questionable uses of the technology emerge, from facial recognition to deepfakes to autonomous weapons.

These types of advancements have always been a possibility but had not yet become a reality. Well, reality checked in last week with the AI fighter pilot. Want more reality? Visit Boston Dynamics' website and see how unprepared a human is to deal with the latest generation of robotic warriors. It's no contest.

“A DeepMind spokesperson says society needs to debate what is acceptable when it comes to AI weapons. ‘The establishment of shared norms around responsible use of AI is crucial,’ she says. DeepMind has a team that assesses the potential impacts of its research, and the company does not always release the code behind its advances. ‘We take a thoughtful and responsible approach to what we publish,’ she added.”

One question remains: who is responsible for setting the rules and guidelines for robot wars?

“The technology is developing much faster than the military-political discussion is going,” says Max Tegmark, a professor at MIT and cofounder of the Future of Life Institute, the organization behind the 2015 letter opposing AI weapons.

Without an international agreement restricting the development of lethal AI weapons systems, Tegmark says, America’s adversaries are free to develop AI systems that can kill. “We’re heading now, by default, to the worst possible outcome,” he says.

U.S. military leaders—and the organizers of the AlphaDogfight contest—say they have no desire to let machines make life-and-death decisions on the battlefield.

The article lays out the pros and cons of this latest move by humans to have AI do our dirty work. This is, and should be, concerning to us all.

read more at wired.com