Friday, December 27, 2024

AI co-pilot enhances human precision for safer aviation

Imagine you are in a plane with two pilots, one human and one computer. Both keep their “hands” on the controls, but each is watching for different things. If they are both paying attention to the same thing, the human steers. But if the human gets distracted or misses something, the computer quickly takes over.

Meet Air-Guardian, a system developed by scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). While today’s pilots contend with a deluge of information from multiple monitors, especially at critical moments, Air-Guardian acts as a proactive co-pilot: a human-machine partnership rooted in an understanding of attention.

But how exactly does it determine attention? For the human, it uses eye tracking; for the neural network, it relies on so-called “saliency maps,” which indicate where attention is directed. These maps serve as visual guides that highlight the key regions of an image, helping to grasp and decipher the behavior of intricate algorithms. Using these attention markers, Air-Guardian identifies early signs of potential risk, rather than intervening only after a safety violation occurs, as conventional autopilot systems do.
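To make the handover logic concrete, here is a minimal sketch, assuming a hypothetical setup in which the pilot’s gaze and the network’s saliency map are both available as heatmaps over the same camera image. The function names, the cosine-similarity overlap measure, and the hard threshold are illustrative assumptions, not the paper’s actual optimization-based cooperative layer:

```python
import numpy as np

def attention_overlap(gaze_map: np.ndarray, saliency_map: np.ndarray) -> float:
    """Cosine similarity between the pilot's gaze heatmap and the
    network's saliency map (both H x W, non-negative). An illustrative
    stand-in for the paper's optimization-based cooperative layer."""
    g = gaze_map.ravel() / (np.linalg.norm(gaze_map) + 1e-8)
    s = saliency_map.ravel() / (np.linalg.norm(saliency_map) + 1e-8)
    return float(g @ s)

def arbitrate(human_cmd: np.ndarray, guardian_cmd: np.ndarray,
              gaze_map: np.ndarray, saliency_map: np.ndarray,
              threshold: float = 0.5) -> np.ndarray:
    """If human and machine attend to the same region, the human steers;
    otherwise the guardian takes over. The hard threshold is a made-up
    simplification: in the real system the handover is continuous,
    which is part of what makes the cooperative layer trainable."""
    if attention_overlap(gaze_map, saliency_map) >= threshold:
        return human_cmd
    return guardian_cmd
```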

The wider implications of this system extend beyond aviation. Similar collaborative control mechanisms could one day be applied to cars, drones and a broader spectrum of robotics.

“The exciting thing about our method is its differentiability,” says Lianhao Yin, a postdoc at MIT CSAIL and lead author of a new paper about Air-Guardian. “Our cooperative layer and the entire end-to-end process are trainable. We specifically chose the causal continuous-depth neural network model because of its dynamic features in mapping attention. Another unique aspect is its adaptability: the Air-Guardian system is not rigid; it can be adjusted to the demands of the situation, ensuring a balanced partnership between human and machine.”

In field tests, both the pilot and the system made decisions based on the same raw images when navigating to a target landmark. Air-Guardian’s success was measured by the cumulative reward earned during each flight and the shorter path taken to the landmark. The guardian system reduced the risk level of flights and increased the efficiency of navigating to destinations.
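As a back-of-the-envelope illustration of those two measures (the function name and the array layout here are my assumptions, since the article does not specify how flights were logged):

```python
import numpy as np

def evaluate_flight(step_rewards, trajectory, landmark):
    """Summarize one flight with the two measures the article mentions:
    cumulative reward earned during the flight, and how close the final
    position came to the target landmark (smaller is better)."""
    cumulative_reward = float(np.sum(step_rewards))
    final_gap = float(np.linalg.norm(np.asarray(trajectory[-1]) -
                                     np.asarray(landmark)))
    return cumulative_reward, final_gap
```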

“This system represents an innovative approach to human-centric aviation using artificial intelligence,” adds Ramin Hasani, MIT CSAIL research associate and inventor of liquid neural networks. “Our use of liquid neural networks provides a dynamic, adaptive approach, ensuring that the AI does not simply replace human judgment but complements it, leading to increased safety and cooperation in the airspace.”

Air-Guardian’s real strength is its core technology. Using an optimization-based cooperative layer that leverages the visual attention of both human and machine, together with liquid closed-form continuous-time neural networks (CfCs), known for their ability to decipher cause-and-effect relationships, it analyzes incoming images for relevant information. Complementing this is the VisualBackProp algorithm, which identifies the system’s focal points within an image, ensuring that its attention maps are clearly understood.
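For intuition, here is a minimal sketch of the VisualBackProp idea: average each convolutional layer’s activations over channels, then walk from the deepest layer back to the shallowest, upsampling and multiplying pointwise so that only regions that stayed relevant through the whole network survive. The nearest-neighbor upsampling is my simplification; the published algorithm uses deconvolutions:

```python
import numpy as np

def _resize_nn(a: np.ndarray, shape: tuple) -> np.ndarray:
    """Nearest-neighbor resize of a 2-D array to (H, W)."""
    ys = np.arange(shape[0]) * a.shape[0] // shape[0]
    xs = np.arange(shape[1]) * a.shape[1] // shape[1]
    return a[np.ix_(ys, xs)]

def visual_backprop(feature_maps: list) -> np.ndarray:
    """Sketch of VisualBackProp (Bojarski et al.). `feature_maps` holds
    one (C, H, W) activation array per conv layer, ordered shallow to
    deep. Returns a normalized H x W saliency map at the shallowest
    layer's resolution, highlighting the network's focal points."""
    averaged = [fm.mean(axis=0) for fm in feature_maps]  # channel means
    mask = averaged[-1]
    for amap in reversed(averaged[:-1]):
        mask = _resize_nn(mask, amap.shape) * amap  # upsample, then gate
    return mask / (mask.max() + 1e-8)
```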

For future mass adoption, the human-machine interface needs to be refined. Feedback suggests that an indicator, such as a bar, might be more intuitive for signaling when the guardian system is taking over.

Air-Guardian heralds a new era of safer skies, offering a reliable safety net for the moments when human attention wanes.

“The Air-Guardian system highlights the synergy between human expertise and machine learning, contributing to the goal of using machine learning to improve pilot performance in challenging scenarios and reduce operational errors,” says Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, director of CSAIL, and senior author of the paper.

“One of the most interesting results of using visual attention measurement in this work is the ability to enable earlier interventions and greater interpretability for pilots,” says Stephanie Gil, an assistant professor of computer science at Harvard University, who was not involved in the work. “This is a perfect example of how artificial intelligence can be used to work with humans, lowering the barrier to gaining trust by using natural communication mechanisms between humans and the artificial intelligence system.”

This research was funded in part by the United States Air Force (USAF) Research Laboratory, USAF Artificial Intelligence Accelerator, Boeing Co. and Office of Naval Research. The findings do not necessarily reflect the views of the U.S. Government or the USAF.
