US aims to stay ahead of China in using AI to fly fighter jets, navigate without GPS and more

WASHINGTON (AP) – Two Air Force fighter jets recently got into a dogfight in California. One was flown by a pilot. The other one was not.

The second jet was piloted by artificial intelligence, with the Air Force’s highest-ranking civilian riding in the front seat. It was the ultimate demonstration of how far the Air Force has come in developing a technology with roots in the 1950s. But it’s just a hint of the technology yet to come.

The United States is competing to stay ahead of China on AI and its use in weapons systems. The focus on AI has fueled public concern that future wars will be fought by machines that select and strike targets without direct human intervention. Officials say this will not happen, at least not on the US side. But there are questions about what a potential adversary would allow, and the military sees no option but to field US capabilities quickly.

“Whether you want to call it a race or not, it sure is,” said Adm. Christopher Grady, vice chairman of the Joint Chiefs of Staff. “We both recognize that this will be a critical aspect of the battlefield going forward. China is working on it as hard as we are.”

A look at the history of military AI development, what technologies are on the horizon and how they will be kept in check:

FROM MACHINE LEARNING TO AUTONOMY

AI’s roots in the military are a hybrid of machine learning and autonomy. Machine learning occurs when a computer analyzes data and sets of rules to draw conclusions. Autonomy occurs when those conclusions are applied to take action without further human input.

This took an early form in the 1960s and 1970s with the development of the Navy’s Aegis missile defense system. Aegis was trained through a series of human-programmed if/then rule sets to independently detect and intercept incoming missiles faster than a human could. But the Aegis system was not designed to learn from its decisions, and its actions were limited to the rules it had been given.

“If a system uses ‘if/then’ it’s probably not machine learning, which is an area of AI that involves creating systems that learn from data,” said Air Force Lt. Col. Christopher Berardi, who is assigned to the Massachusetts Institute of Technology to help develop Air Force AI.
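To make the distinction concrete, here is a simplified, hypothetical sketch (in Python, with invented thresholds, not the actual Aegis software) of what a human-programmed if/then rule set looks like: every number is supplied by an engineer, and the system can apply the rules faster than a person could, but it never changes them on its own.

```python
# Illustrative only: a hand-programmed, Aegis-style if/then rule set.
# Every threshold is written in by a human engineer; the system applies
# the rules quickly but never revises them from experience.

def should_intercept(track):
    """Return True if a radar track meets the hand-written engagement rules."""
    if track["identified_friendly"]:
        return False
    if track["closing_speed_mps"] > 300 and track["range_km"] < 40:
        return True
    return track["altitude_m"] < 15000 and track["closing_speed_mps"] > 150

# Invented example track:
print(should_intercept({"identified_friendly": False, "closing_speed_mps": 450,
                        "range_km": 25, "altitude_m": 9000}))   # True
```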

AI took a big step forward in 2012 when the combination of big data and high computing power enabled computers to start analyzing the information and writing the rules themselves. It’s what AI experts call the “big bang.”

The new data created by a computer writing its own rules is artificial intelligence. Systems can be programmed to act autonomously on the conclusions drawn from those machine-written rules, which is a form of AI-enabled autonomy.
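By contrast, here is a minimal, hypothetical sketch of a computer writing its own rule from data and then acting on that rule without further human input. The numbers and the "threat" scenario are invented for illustration.

```python
# Illustrative only: the computer derives its own rule from labeled data,
# then acts on that rule without further human input (AI-enabled autonomy).

# Invented training data: (closing_speed_mps, label) where 1 = threat, 0 = not.
examples = [(80, 0), (120, 0), (160, 0), (310, 1), (420, 1), (500, 1)]

# "Machine learning": pick the speed threshold that best separates the labels,
# rather than having an engineer hard-code it.
candidates = sorted(speed for speed, _ in examples)
best_threshold = max(
    candidates,
    key=lambda t: sum((speed >= t) == bool(label) for speed, label in examples),
)

# "Autonomy": the conclusion of the machine-written rule triggers an action
# with no human in the loop (here, just a print statement).
def autonomous_response(closing_speed_mps):
    if closing_speed_mps >= best_threshold:
        print(f"Track at {closing_speed_mps} m/s classified as threat -> alerting defenses")
    else:
        print(f"Track at {closing_speed_mps} m/s ignored")

autonomous_response(380)
```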

TESTING AN ALTERNATIVE TO GPS NAVIGATION

Air Force Secretary Frank Kendall got a taste of advanced warfare this month when he flew in Vista, the first AI-controlled F-16 fighter jet, in a dogfighting exercise at Edwards Air Force Base in California.

While that jet is the most visible sign of the AI work underway, hundreds of other AI projects are in progress across the Pentagon.

At MIT, service members worked to sift through thousands of hours of recorded pilot conversations to create a data set from the flood of messages exchanged between crews and air operations centers during flights, so the AI could learn the difference between critical messages, like a runway being closed, and mundane cockpit chatter. The goal was for the AI to learn which messages are critical, so controllers see them faster.
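A minimal sketch of the kind of classifier such a data set could be used to train is below. The radio phrases, labels and model choice are invented for illustration and are not the MIT project’s actual data or code.

```python
# Hypothetical sketch: classifying radio messages as critical vs. routine.
# The phrases and labels are invented; this is not the MIT data set or model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "runway two-two is closed, divert to alternate",   # critical
    "bird strike reported on final approach",          # critical
    "hold short, traffic crossing downfield",          # critical
    "requesting updated weather at destination",       # routine
    "switching to tower frequency",                    # routine
    "fuel check complete, all normal",                 # routine
]
labels = ["critical", "critical", "critical", "routine", "routine", "routine"]

# Bag-of-words features plus a linear classifier: the model learns which
# words tend to signal a message a controller needs to see right away.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Likely classified as 'critical' given the training phrases above.
print(model.predict(["runway closed due to debris"]))
```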

In another notable project, the military is working on an AI alternative to GPS-dependent satellite navigation.

In a future war, high-value GPS satellites are likely to be hit or disrupted. Loss of GPS could blind US communications, navigation and banking systems and render the US military’s fleet of aircraft and warships unable to coordinate a response.

So last year the Air Force flew an AI program — loaded onto a laptop attached to the floor of a C-17 military cargo plane — to work on a different solution using Earth’s magnetic fields.

It has long been known that aircraft could navigate by following the Earth’s magnetic fields, but until now it has not been practical because each aircraft generates so much of its own electromagnetic noise that there was no good way to filter for the Earth’s emissions alone.

“Magnetometers are very sensitive,” said Col. Garry Floyd, director of the Air Force-MIT Artificial Intelligence Accelerator program. “If you turned on the strobe lights on a C-17, you would see it.”

The AI learned from the flights and reams of data which signals to ignore and which to follow, and the results were “very, very impressive,” Floyd said. “We’re talking tactical airdrop quality.”
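The core idea, in very simplified form: learn how the aircraft’s own systems perturb the magnetometer, subtract that prediction, and treat what is left as the Earth’s field, which can then be matched against magnetic maps. The sketch below uses synthetic numbers and a plain least-squares fit, not the Air Force-MIT team’s actual methods.

```python
# Hypothetical sketch of magnetic-navigation denoising: model the aircraft's
# self-generated magnetic noise from its own state, subtract it, and keep the
# residual as an estimate of the Earth's field. All signals are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic "aircraft state" readings (e.g., bus currents, equipment settings)
# that drive the aircraft's electromagnetic noise.
aircraft_state = rng.normal(size=(n, 3))

# Synthetic ground truth: a slowly varying Earth field plus aircraft noise (nT).
earth_field = 50_000 + 30 * np.sin(np.linspace(0, 8 * np.pi, n))
noise_weights = np.array([400.0, -250.0, 120.0])
aircraft_noise = aircraft_state @ noise_weights
raw_magnetometer = earth_field + aircraft_noise + rng.normal(scale=5.0, size=n)

# "Training": fit a model of the aircraft noise from the state measurements.
X = np.column_stack([aircraft_state, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(X, raw_magnetometer - raw_magnetometer.mean(), rcond=None)

# "Inference": subtract the predicted aircraft noise to recover the Earth signal,
# which could then be matched against magnetic anomaly maps for navigation.
recovered = raw_magnetometer - X @ coeffs
print("mean error vs. true Earth field (nT):",
      round(float(np.abs(recovered - earth_field).mean()), 1))
```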

“We think we may have added an arrow to the quiver in the things we can do, should we end up operating in a GPS-denied environment. Which we will,” Floyd said.

The AI has so far been tested only on the C-17. It will also be tested on other aircraft, and if it works it could give the military another way to operate if GPS goes down.

SAFETY GUARDRAILS AND PILOT SPEAK

Vista, the AI-controlled F-16, has significant guardrails while the Air Force trains it. There are mechanical limits that keep the still-learning AI from executing maneuvers that would endanger the plane. There is also a safety pilot who can take control from the AI at the push of a button.

The algorithm cannot learn during a flight, so it builds only gradually on the data sets and rules it has created from previous flights. When a new flight is over, the algorithm is transferred back to a simulator, where it is fed the new data gathered in flight so it can learn from it, create new rule sets and improve its performance.
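In outline, that loop looks something like the toy sketch below: the policy is frozen in the air, the logged flight data updates it on the ground, and only the updated version flies the next sortie. The task and numbers are invented for illustration.

```python
# Hypothetical sketch of the fly-then-retrain loop: no learning in the air,
# updates only on the ground between sorties. The toy task is to learn a
# control gain; the values are invented.

def fly_sortie(gain, states):
    """Fly with a frozen control gain; record what happened at each step."""
    log = []
    for state in states:
        action = gain * state
        desired = 0.8 * state            # toy stand-in for "the maneuver that worked"
        log.append((state, action, desired))
    return log

def retrain_in_simulator(gain, log, lr=0.05):
    """Between flights, update the gain from logged data (gradient step on error)."""
    for state, action, desired in log:
        gain -= lr * (action - desired) * state
    return gain

gain = 0.2
for sortie in range(1, 4):
    log = fly_sortie(gain, states=[0.5, 1.0, 1.5, 2.0])   # gain frozen during flight
    gain = retrain_in_simulator(gain, log)                 # learning on the ground
    print(f"after sortie {sortie}: gain = {gain:.3f}")     # drifts toward 0.8
```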

But the AI is learning fast. Because of the supercomputing speed the AI uses to analyze data, and then to fly those new rule sets in the simulator, it has been quick to find the most efficient ways to fly and maneuver, and it has already beaten some human pilots in dogfighting exercises.

But safety remains a critical concern, and officials said the most important way to ensure it is to control the data fed into the simulator that the AI learns from. In the jet’s case, that means making sure the data reflects safe flying. Ultimately, the Air Force hopes a version of the AI being developed can serve as the brains of a fleet of 1,000 unmanned aircraft being developed by General Atomics and Anduril.

In the experiment training the AI on how pilots communicate, service members assigned to MIT scrubbed the recordings to remove classified information and the pilots’ sometimes salty language.

Learning how pilots communicate is “an expression of command and control, of how pilots think. The machines have to understand that too if they’re going to be very good,” said Grady, the vice chairman of the Joint Chiefs. “They don’t need to learn how to cuss.”
