An MQ-9 Reaper Unmanned Aerial Vehicle flies a combat mission over southern Afghanistan in a photo from 2008. Currently, drones are operated by humans in the Air Force. (Source: Wikipedia via the U.S. Air Force)

Some Rattled by U.S. Air Force Drone Scenario of Potential Problems

How do you feel about being part of a phase in intelligence that AI is about to make obsolete? Well, according to someone known as one of the Godfathers of AI, we are in that phase.

This week two articles report on an event with the U.S. Air Force that may or may not have happened. And of course, it involves AI.

Usatoday.com and theguardian.com reported stories about confusion regarding a purported AI test drone. The unverified story is that a virtual drone killed its operator to complete its mission and receive the points it was trained to go after.

An official said last month that in a virtual test staged by the U.S. military, an Air Force drone controlled by AI had used “highly unexpected strategies to achieve its goal.” The story mushroomed on social media based on misinterpreted comments from an Air Force colonel at a seminar in London last month.

Col Tucker “Cinco” Hamilton described a simulated test in which a drone powered by AI was advised to destroy an enemy’s air defense systems and ultimately attacked anyone who interfered with that order.

However, Hamilton was only speaking about a hypothetical scenario to illustrate the potential hazards of AI, according to the Air Force. It did not conduct such a simulation with a drone.

“The Department of the Air Force has not conducted any such AI drone simulations and remains committed to ethical and responsible use of AI technology,” Ann Stefanek, an Air Force spokesperson, said in a statement

Colonel Misspoke

While the press still has not sorted out the story, theguardian.com had this to say about what may or may not have happened. And while it was reported to be a simulated test that Hamilton had been speaking about it still was unsettling for many in the Air Force hierarchy.

The AI the drone was trained on involved getting points for the algorithm. And when the AI figured out the ultimate way to accomplish its mission, it meant destroying the communications tower used in the test.  As well as the bunker with its operator. Hypothetically anyway.

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blog post.

Finally, it was summed up this way by the officer in an interview last year with Defense IQ. Hamilton said:

“AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.

“We must face a world where AI is already here and transforming our society. AI is also very brittle, ie it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.”

read more at theguardian.com

or at usatoday.com