Neural networks have had a seismic impact on the way engineers design robot controllers, catalyzing more adaptive and competent machines. Still, these brain-like machine learning systems are a double-edged sword: Their complexity makes them powerful, but it also makes it tough to ensure that a robot powered by a neural network will safely complete its task.
The conventional way to verify safety and stability is with mathematical tools called Lyapunov functions. If you can find a Lyapunov function whose value consistently decreases along the system's trajectories, you know that the hazardous or unstable situations associated with higher values will never occur. However, for robots controlled by neural networks, previous approaches to verifying Lyapunov conditions did not scale well to sophisticated machines.
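In textbook form (a generic statement, not the paper's exact formulation), a Lyapunov function V for closed-loop dynamics ẋ = f(x) with equilibrium x* must vanish at the equilibrium, be positive everywhere else, and strictly decrease along trajectories:

```latex
% Generic Lyapunov conditions for an equilibrium x^* of closed-loop dynamics
% \dot{x} = f(x); the paper certifies analogous conditions when both V and
% the controller are neural networks.
\begin{aligned}
  & V(x^*) = 0, \qquad V(x) > 0 \quad \text{for all } x \neq x^*, \\
  & \dot{V}(x) = \nabla V(x)^{\top} f(x) < 0 \quad \text{for all } x \neq x^*.
\end{aligned}
```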
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and elsewhere have developed novel techniques that rigorously certify Lyapunov computations in more sophisticated systems. Their algorithm efficiently finds and verifies the Lyapunov function, providing assurances of system stability. This approach could potentially enable safer deployment of robots and autonomous vehicles, including aircraft and spacecraft.
To outperform previous algorithms, the researchers found a cost-effective shortcut to the training and verification process. They generated cheaper counterexamples, such as adversarial sensor data that could trip up the controller, and then optimized the robotics system to account for them. Understanding these edge cases helped the machines learn how to handle challenging circumstances, allowing them to operate safely under a wider range of conditions than previously possible. The team then developed a novel verification formulation that uses a scalable neural network verifier, α,β-CROWN, to provide rigorous worst-case guarantees that go beyond the counterexamples.
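To make the general idea concrete, here is a minimal, hypothetical sketch of counterexample-guided training of a neural Lyapunov function for a simple system with known dynamics (a damped pendulum). Every name in it (LyapunovNet, cheap_counterexamples, the loss weights) is invented for illustration; the paper's actual objective, system models, and controller training differ, and the formal worst-case guarantee comes from handing the trained networks to a verifier such as α,β-CROWN, not from this training loop.

```python
# A minimal, illustrative sketch (not the authors' code) of counterexample-guided
# training of a neural Lyapunov function for a damped pendulum with known dynamics.
# Positivity of V is enforced by construction; training pushes V to decrease along
# trajectories, with cheap adversarial states mixed into each batch.
import torch
import torch.nn as nn


def dynamics(x):
    # Damped pendulum, state x = (theta, theta_dot); x = 0 is the equilibrium.
    theta, theta_dot = x[:, 0], x[:, 1]
    return torch.stack([theta_dot, -torch.sin(theta) - 0.5 * theta_dot], dim=1)


class LyapunovNet(nn.Module):
    # V(x) = ||phi(x) - phi(0)||^2 + eps * ||x||^2, so V(0) = 0 and V(x) > 0 otherwise.
    def __init__(self, hidden=32, eps=1e-3):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh())
        self.eps = eps

    def forward(self, x):
        z = self.phi(x) - self.phi(torch.zeros_like(x))
        return (z ** 2).sum(dim=1) + self.eps * (x ** 2).sum(dim=1)


def vdot(V, x):
    # Time derivative of V along the dynamics: dV/dt = grad V(x) . f(x).
    # Expects x with requires_grad=True.
    grad = torch.autograd.grad(V(x).sum(), x, create_graph=True)[0]
    return (grad * dynamics(x)).sum(dim=1)


def cheap_counterexamples(V, x, steps=10, lr=0.05, bound=2.0):
    # Gradient ascent on dV/dt to find states most likely to violate the
    # decrease condition -- the "cheaper counterexamples" idea.
    x = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        (g,) = torch.autograd.grad(vdot(V, x).sum(), x)
        with torch.no_grad():
            x += lr * g.sign()
            x.clamp_(-bound, bound)
    return x.detach()


V = LyapunovNet()
opt = torch.optim.Adam(V.parameters(), lr=1e-3)
data = 4.0 * torch.rand(2048, 2) - 2.0   # random states in the region of interest

for it in range(500):
    idx = torch.randperm(len(data))[:256]
    ce = cheap_counterexamples(V, data[idx])
    batch = torch.cat([data[idx], ce]).requires_grad_(True)
    # Hinge loss: penalize states where V fails to decrease at a minimum rate.
    loss = torch.relu(vdot(V, batch) + 0.1 * V(batch)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if it % 100 == 0:
        print(f"iter {it}: decrease-violation loss = {loss.item():.4f}")

# A real pipeline would now pass V (and the neural controller) to a formal
# verifier such as alpha,beta-CROWN to certify the conditions over the region.
```

The point of the sketch is the division of labor: cheap gradient-based counterexamples steer training toward the states most likely to break the decrease condition, so that the subsequent formal verification step has far fewer violations left to find.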
“We’ve seen impressive empirical results in AI-controlled machines like humanoids and robotic dogs, but these AI controllers lack the formal guarantees that are essential for safety-critical systems,” says Lujie Yang, a doctoral candidate in electrical engineering and computer science (EECS) at MIT and a CSAIL affiliate who co-authored a new paper on the project with Toyota Research Institute researcher Hongkai Dai SM ’12, PhD ’16. “Our work combines this level of performance from neural network controllers with the safety guarantees needed to implement more complex neural network controllers in the real world,” Yang notes.
In a digital demonstration, the team simulated how a quadcopter drone with lidar sensors stabilizes itself in a two-dimensional environment. Their algorithm successfully guided the drone to a stable hovering position using only the limited environmental information provided by the lidar sensors. In two other experiments, their approach enabled two simulated robotic systems, an inverted pendulum and a path-following vehicle, to operate stably under a wider range of conditions. These experiments, while modest, are notably more sophisticated than what the neural network verification community had been able to handle before, especially because they involved sensor models.
“Unlike typical machine learning problems, rigorous use of neural networks as Lyapunov functions requires solving difficult global optimization problems, and thus scalability is a key bottleneck,” says Sicun Gao, assistant professor of computer science and engineering at the University of California, San Diego, who was not involved in this work. “The current work makes an important contribution by developing algorithmic approaches that are much better suited to the specific application of neural networks as Lyapunov functions in control problems. It achieves impressive improvements in scalability and solution quality compared to existing approaches. The work opens exciting directions for further development of optimization algorithms for neural Lyapunov methods and rigorous application of deep learning in control and robotics in general.”
Yang and her colleagues’ approach to stability has potentially broad applications where safety assurance is key. It could help ensure a smoother ride for autonomous vehicles like planes and spacecraft. Similarly, a drone delivering items or mapping terrain could benefit from such safety assurances.
The techniques developed here are very general and not specific to robotics; the same techniques may find application in other fields in the future, such as biomedicine and industrial processing.
While the technique is an improvement over previous work in terms of scalability, the researchers are exploring how it can perform better in higher-dimensional systems. They also want to include data beyond lidar readings, such as images and point clouds.
As a future research direction, the team would like to provide the same stability guarantees for systems that operate in uncertain environments and are subject to disturbances. For example, if a drone encounters a strong gust of wind, Yang and her colleagues want to make sure it will still fly stably and perform the desired task.
They also plan to apply their method to optimization problems, where the goal is to minimize the time and distance a robot needs to complete a task while remaining stable. They plan to extend their technique to humanoids and other real-world machines, where a robot must remain stable while interacting with its environment.
Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering at MIT, vice president of robotics research at TRI, and a member of CSAIL, is the senior author of the study. The paper also credits UCLA doctoral student Zhouxing Shi, UCLA associate professor Cho-Jui Hsieh, and University of Illinois Urbana-Champaign assistant professor Huan Zhang. The work was supported in part by Amazon, the National Science Foundation, the Office of Naval Research, and the AI2050 program at Schmidt Sciences. The researchers’ paper will be presented at the 2024 International Conference on Machine Learning.