The U.S. Air Force is planning to use algorithms to fly autonomous jet fighters. (Source: Air Force Research Laboratory)

Air Force Plans to Use Human Pilots Along with Autonomous Controls for Future Air Defense

On the surface, it might seem like a good idea. The U.S. Air Force has announced it wants to build up future air defenses with AI-flown fighter jets operating in tandem with human-piloted ones. The article on nationaldefensemagazine.org describes how failed tests shaped the Air Force's thinking about how it plans to proceed.

According to Secretary of the Air Force Frank Kendall and Gen. Mark Kelly, commander of Air Combat Command, the combination of algorithms and human pilots could prove nearly unbeatable.

Both men have pitched the Air Force’s Next-Generation Air Dominance program as a package deal of crewed and uncrewed systems. While the collaborative combat aircraft program isn’t funded to start until 2024, industry executives said they are gearing up their autonomous capabilities to expand the potential for manned and unmanned teaming.

And General Kelly said he expects private companies to be involved in putting the fighter jet design teams together.

“I’m an advocate to iterate our way there because I think there’s so much we don’t know,” he said during a media roundtable at the Air and Space Forces Association’s annual conference in National Harbor, Maryland.

Operational tests for the collaborative combat aircraft will take place in two or three years, General Kelly said.

The industry needs to participate in the experimentation that will shape the autonomous capabilities, said Mike Atwood, senior director of the advanced programs group at General Atomics Aeronautical Systems.

“I think that will be maybe the Sputnik moment of cultural change, where we realize when we saw … F-22 and F-35s in the range, how challenging it is to go against that,” Atwood said during a panel at the conference.

One area for the industry to navigate alongside the Air Force is how its systems will face other AI-based systems, he added. That challenge could shape the ethical limits of autonomous systems.

Using the Best Industry-Produced AI

A new advancement in autonomous capabilities with potential for future AI-controlled aerial vehicles is reinforcement learning, he said. Using algorithms, an operator can define the world the machine is allowed to operate in and give it a set of actions. The device can then self-learn all the possible combinations of those actions in the set environment.
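To make that pattern concrete, here is a minimal reinforcement-learning sketch: an operator defines a small "world" (a grid) and a fixed set of permitted actions, and a tabular Q-learning agent learns by trial and error which actions work where. Every name, parameter, and the toy environment itself are illustrative assumptions for this example only, not anything drawn from the Air Force program or a specific vendor system.

```python
# Toy reinforcement-learning sketch: operator-defined world and action set,
# agent self-learns action values inside those limits. Purely illustrative.
import random

GRID = 5                                     # 5x5 world the agent may operate in
ACTIONS = ["up", "down", "left", "right"]    # fixed set of permitted actions
GOAL = (4, 4)                                # hypothetical objective cell

def step(state, action):
    """Apply an action, clamping the agent inside the defined world."""
    x, y = state
    if action == "up":    y = min(GRID - 1, y + 1)
    if action == "down":  y = max(0, y - 1)
    if action == "left":  x = max(0, x - 1)
    if action == "right": x = min(GRID - 1, x + 1)
    reward = 1.0 if (x, y) == GOAL else -0.01
    return (x, y), reward, (x, y) == GOAL

# Tabular Q-learning: the agent learns the value of each action in each state.
q = {(x, y): {a: 0.0 for a in ACTIONS} for x in range(GRID) for y in range(GRID)}
alpha, gamma, epsilon = 0.1, 0.95, 0.1       # learning rate, discount, exploration

for episode in range(2000):
    state, done = (0, 0), False
    for _ in range(200):                     # cap episode length
        # Explore occasionally, otherwise exploit what has been learned so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(q[state], key=q[state].get)
        nxt, reward, done = step(state, action)
        best_next = max(q[nxt].values())
        q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
        state = nxt
        if done:
            break

print("Learned best first move from (0, 0):", max(q[(0, 0)], key=q[(0, 0)].get))
```

The key point the example illustrates is the one Atwood makes: the operator fixes the boundaries (the grid and the action list), and the system is free to discover effective behavior only within them.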

This type of learning could be reassuring to those with concerns about AI, especially as the military begins to test its largest class of unmanned aerial vehicles, he said. Setting limits on what the machine can do offers that reassurance while still allowing the system to innovate within them, Atwood said.

And combined with the training of human pilots, a clearer picture is emerging of future capabilities in confrontations with adversaries, one that has the military very excited.

The Air Force and industry need to discuss what the range of acceptable behavior is for AI and how to build trust within that range, General Kelly said.

read more at nationaldefensemagazine.org