Thursday, December 26, 2024

Drones navigate unseen environments with liquid neural networks


A new generation of aviators is taking to the vast, open sky where birds once ruled. These aerial pioneers are not living creatures, but rather the product of deliberate innovation: drones. And these aren’t your typical flying robots, buzzing like mechanical bees. Rather, they are bird-inspired marvels that soar through the sky, guided by liquid neural networks to navigate ever-changing, unseen environments with precision and ease.

Inspired by the adaptive nature of organic brains, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced a method for robust flight navigation agents to master vision-based fly-to-target tasks in intricate, unfamiliar environments. Liquid neural networks, which can continuously adapt to new data inputs, have shown strength in making reliable decisions in unknown domains such as forests, urban landscapes, and environments with added noise, rotation, and occlusion. These adaptable models, which outperformed many state-of-the-art counterparts on navigation tasks, could enable potential real-world drone applications such as search and rescue, delivery, and wildlife monitoring.
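At the heart of a liquid network is a neuron whose effective time constant depends on the input it is currently receiving, so its dynamics keep shifting with the data stream even after training. Below is a minimal sketch of one liquid time-constant (LTC) cell update using the fused-Euler step from the underlying LTC formulation; all sizes, weights, and names are illustrative assumptions, not the paper’s implementation.

```python
# Minimal liquid time-constant (LTC) cell step (fused-Euler integration).
# Illustrative sketch only: sizes, weights, and names are assumptions.
import numpy as np

def ltc_step(x, inputs, W_in, W_rec, b, tau, A, dt=0.05):
    """One integration step of an LTC cell.

    x      : (hidden,)  current hidden state
    inputs : (in_dim,)  current observation (e.g., visual features)
    tau    : (hidden,)  base time constants (positive)
    A      : (hidden,)  per-neuron target state
    """
    # Input-dependent gate: this makes the effective time constant
    # vary with what the network currently sees.
    f = 1.0 / (1.0 + np.exp(-(W_in @ inputs + W_rec @ x + b)))
    # Fused step for dx/dt = -(1/tau + f) * x + f * A
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Toy usage: 8 hidden neurons driven by 4 input features.
rng = np.random.default_rng(0)
hidden, in_dim = 8, 4
x = np.zeros(hidden)
W_in = rng.normal(size=(hidden, in_dim))
W_rec = rng.normal(size=(hidden, hidden))
b, tau, A = rng.normal(size=hidden), np.ones(hidden), rng.normal(size=hidden)
for _ in range(100):  # unroll over a short input sequence
    x = ltc_step(x, rng.normal(size=in_dim), W_in, W_rec, b, tau, A)
```

Because the gate f appears in both the decay and drive terms, each neuron relaxes faster or slower depending on the current observation, which is what lets the cell keep adapting to new inputs after training ends.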

A study published today in Science Robotics details how this new breed of agents can adapt to significant distribution shifts, a long-standing challenge in the field. The team’s new class of machine-learning algorithms, however, captures the causal structure of tasks from high-dimensional, unstructured data, such as pixel inputs from a drone-mounted camera. These networks can then extract the crucial aspects of a task (i.e., understand the task at hand) and ignore irrelevant features, allowing acquired navigation skills to transfer seamlessly to new environments.
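One plausible way to wire such a system end to end is a small convolutional backbone that compresses each camera frame into a compact feature vector, followed by a recurrent head that turns the feature sequence into control commands. The PyTorch sketch below is a hypothetical layout: the GRU is only a stand-in for the liquid recurrent cell, and every module name and size is an assumption rather than the authors’ architecture.

```python
# Hypothetical vision-to-control layout: CNN backbone -> recurrent head.
# The GRU is a placeholder for the liquid cell; sizes are assumptions.
import torch
import torch.nn as nn

class VisionToControl(nn.Module):
    def __init__(self, feat_dim=32, hidden=19, n_controls=4):
        super().__init__()
        self.backbone = nn.Sequential(            # pixels -> compact features
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_controls)  # e.g., roll/pitch/yaw/thrust

    def forward(self, frames):                    # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1)).view(B, T, -1)
        out, _ = self.rnn(feats)                  # process the feature sequence
        return self.head(out)                     # per-step control commands

cmds = VisionToControl()(torch.randn(2, 8, 3, 64, 64))  # -> (2, 8, 4)
```

One intuition for this bottleneck design is that only a compact feature vector reaches the recurrent policy, encouraging the network to retain what matters for the task and discard scenery-specific detail.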

Video: Drones navigate unseen environments with liquid neural networks.

“We are excited about the immense potential of our learning-based approach to robot control, as it lays the groundwork for solving problems that arise when a system is trained in one environment and deployed in a completely different environment without additional training,” says Daniela Rus, CSAIL director and the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT. “Our experiments show that we can effectively train a drone to locate an object in a forest during the summer, and then deploy the model in winter, in a wildly different setting, or even in urban settings, for varied tasks such as seeking and following. This adaptability is made possible by the causal underpinnings of our solutions. These flexible algorithms could one day aid in decision-making based on data streams that change over time, such as medical diagnosis and autonomous driving applications.”

A daunting challenge was at the forefront: Do machine-learning systems understand the task they are given from data when flying drones to an unlabeled object? And could they transfer their learned skills and tasks to new environments with drastic changes in scenery, such as flying from a forest to an urban landscape? Moreover, unlike the remarkable abilities of our biological brains, deep learning systems struggle to capture causality, frequently overfitting their training data and failing to adapt to new environments or changing conditions. This is especially troubling for resource-limited embedded systems, like aerial drones, that need to traverse varied environments and respond to obstacles instantaneously.

The liquid networks, in contrast, offer promising preliminary indications of their capacity to address this crucial weakness in deep learning systems. The team’s system was first trained on data collected by a human pilot, to see how it would transfer learned navigation skills to new environments under drastic changes in scenery and conditions. Unlike traditional neural networks, which learn only during the training phase, the parameters of a liquid neural network can change over time, making them not only interpretable, but also more resilient to unexpected or noisy data.
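Because the system learns from a human pilot’s demonstrations rather than a hand-coded controller, training reduces to behavior cloning: regressing the network’s output commands onto the pilot’s recorded commands. Below is a minimal sketch, reusing the hypothetical VisionToControl model from above; the data loader, loss, and learning rate are all assumptions.

```python
# Minimal behavior-cloning loop over recorded (frames, pilot_commands)
# pairs from a human demonstrator. VisionToControl is the hypothetical
# model sketched earlier; loader, loss, and hyperparameters are assumptions.
import torch
import torch.nn as nn

model = VisionToControl()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
loss_fn = nn.MSELoss()

def train_epoch(loader):
    """One pass over the demonstration data."""
    for frames, pilot_cmds in loader:     # (B, T, 3, H, W), (B, T, 4)
        pred = model(frames)              # network's command at each step
        loss = loss_fn(pred, pilot_cmds)  # imitate the expert pilot
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The generalization test described in the article then amounts to running the cloned policy in scenes it never saw during these demonstrations.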

The team believes that the ability to learn from limited expert data and understand a given task while generalizing to new environments could make autonomous drone deployment more efficient, cost-effective, and reliable. They noted that liquid neural networks could enable the use of autonomous drones for environmental monitoring, package delivery, autonomous vehicles, and robotic assistants.

“The experimental setup presented in our work tests the reasoning capabilities of various deep learning systems in controlled and straightforward scenarios,” says Ramin Hasani, research affiliate at MIT CSAIL. “There is still so much room left for future research and development on the more complex reasoning challenges for AI systems in autonomous navigation applications, which have to be tested before we can safely deploy them in our society.”

“Robust learning and performance in out-of-distribution tasks and scenarios are some of the key problems that machine learning and autonomous robotic systems have to conquer to make further inroads in safety-critical applications for society,” says Alessio Lomuscio, professor of AI safety in the Department of Computing at Imperial College London. “In this context, the performance of liquid neural networks, the novel brain-inspired paradigm developed by the MIT authors and presented in this study, is remarkable. If these results are confirmed in other experiments, the paradigm developed here will contribute to making AI and robotic systems more reliable, robust, and efficient.”

Clearly, the sky is no longer the limit, but rather a vast playground for the boundless possibilities of these aerial wonders.

Hasani and PhD student Makram Chahine; Patrick Kao ’22, MEng ’22; and graduate student Aaron Ray SM ’21 wrote the paper with Ryan Shubert ’20, MEng ’22; MIT postdoctoral fellows Mathias Lechner and Alexander Amini; and Rus.

This research was supported in part by Schmidt Futures, the U.S. Air Force Research Laboratory, the U.S. Air Force Artificial Intelligence Accelerator, and Boeing Co.
