Friday, May 9, 2025

A better way to control shape-shifting soft robots

Imagine a slime-like robot that can seamlessly change its shape to squeeze through tight spaces, and that could be deployed inside the human body to remove an unwanted object.

While such a robot does not yet exist outside the laboratory, researchers are working to develop reconfigurable soft robots for applications in health care, wearable devices, and industrial systems.

But how do you control a soft robot that has no joints, limbs, or fingers to manipulate, and that can instead drastically change its entire shape at will? A team of MIT researchers is working to answer that question. They developed a control algorithm that can autonomously learn how to move, stretch, and shape a reconfigurable robot to complete a specific task.

Their method completed each of the eight tasks they evaluated, outperforming other algorithms. The technique worked especially well on multifaceted tasks. For example, in one test, the robot had to reduce its height while growing two small legs to squeeze through a narrow pipe, then retract those legs and extend its torso to open the pipe’s lid.

While reconfigurable soft robots are still in their infancy, such a technique could one day enable general-purpose robots that adapt their shape to accomplish a variety of tasks.

“When people think of soft robots, they usually think of robots that are elastic but return to their original shape. Our robot is like slime and can actually change its morphology. It is very striking that our method worked so well, because we are dealing with something totally new,” says Boyuan Chen, a graduate student in electrical engineering and computer science (EECS) and co-author of a paper on this approach.

Chen’s co-authors include lead author Suning Huang, an undergraduate student at Tsinghua University in China who completed this work while a visiting student at MIT; Huazhe Xu, an assistant professor at Tsinghua University; and senior author Vincent Sitzmann, an assistant professor of EECS at MIT who leads the Scene Representation Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Learning Representations.

Dynamic motion control

Scientists often teach robots to complete tasks using a machine-learning technique known as reinforcement learning, a trial-and-error process in which the robot is rewarded for actions that move it closer to a goal.

This can be effective when the robot’s moving parts are consistent and well-defined, such as a three-finger gripper. With a robotic gripper, a reinforcement learning algorithm might move one finger slightly, learning by trial and error whether that motion earns it a reward. Then it moves on to the next finger, and so on.
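To make that loop concrete, here is a minimal sketch of trial-and-error learning for a single toy gripper finger. The target angle and reward function are invented for illustration, and the simple hill climb below is a caricature of reinforcement learning, not the researchers’ setup.

```python
import random

# Toy illustration of reinforcement learning's trial-and-error loop:
# a single gripper "finger" nudges its angle and keeps changes that
# increase a reward. The target and reward here are made up.
TARGET_ANGLE = 0.8  # hypothetical angle that grips the object

def reward(angle):
    """Higher reward the closer the finger is to the target angle."""
    return -abs(angle - TARGET_ANGLE)

angle = 0.0
best = reward(angle)
for step in range(1000):
    trial = angle + random.uniform(-0.05, 0.05)  # try a small movement
    r = reward(trial)
    if r > best:  # keep the action only if it improved the reward
        angle, best = trial, r

print(f"learned angle: {angle:.3f}")
```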

But shape-shifting robots, controlled by magnetic fields, can dynamically squash, bend or extend their entire bodies.

The researchers built a simulator to test control algorithms for deformable soft robots on a series of challenging, shape-changing tasks. Here, the reconfigurable robot learns to elongate and curve its soft body to weave around obstacles and reach its target.

Image: Courtesy of the researchers

“Such a robot could have thousands of tiny muscles to control, so it is very hard to learn in a traditional way,” Chen says.

To solve this problem, he and his collaborators had to think about it differently. Rather than moving each tiny muscle individually, their reinforcement learning algorithm begins by learning to control groups of adjacent muscles that work together.

Then, once the algorithm has explored the space of possible actions by focusing on muscle groups, it drills down to finer detail to optimize the policy, or action plan, it has learned. In this way, the control algorithm follows a coarse-to-fine methodology.
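A toy sketch of the coarse-to-fine idea, written as a random-search hill climb over a grid of muscle activations: the 8x8 muscle grid, the block upsampling, and the stand-in reward are all illustrative assumptions, not the paper’s actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
FINE = 8  # assume the robot's muscles lie on an 8x8 grid (illustrative)

def upsample(act):
    """Expand a coarse grid so each entry drives a whole block of muscles."""
    r = FINE // act.shape[0]
    return np.repeat(np.repeat(act, r, axis=0), r, axis=1)

def score(act):
    """Stand-in task reward: prefers a smooth left-to-right activation ramp."""
    target = np.tile(np.linspace(0.0, 1.0, FINE), (FINE, 1))
    return -np.mean((upsample(act) - target) ** 2)

def refine(act, iters, step):
    """Random-search hill climbing over a muscle-activation grid."""
    best, best_s = act, score(act)
    for _ in range(iters):
        trial = np.clip(best + rng.normal(0.0, step, best.shape), 0.0, 1.0)
        if (s := score(trial)) > best_s:
            best, best_s = trial, s
    return best

# Coarse stage: one value per 4x4 block, so each random action moves
# many muscles at once and reliably changes the outcome.
coarse = refine(np.full((2, 2), 0.5), iters=500, step=0.2)

# Fine stage: warm-start from the coarse policy, then adjust muscles individually.
fine = refine(upsample(coarse), iters=500, step=0.05)
print(f"coarse: {score(coarse):.4f}  fine: {score(fine):.4f}")
```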

“Coarse-to-fine means that when you take a random action, that random action is likely to make a difference. The change in the outcome is likely very significant because you coarsely control several muscles at the same time,” says Sitzmann.

To make this possible, the researchers treat the robot’s action space, that is, the way it can move within a certain area, as an image.

Their machine-learning model uses images of the robot’s environment to generate a 2D action space, which includes the robot and the area around it. They simulate robot motion using what is known as the material point method, in which the action space is covered with points, like image pixels, and overlaid with a grid.
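As a rough illustration of the material point method’s core bookkeeping, the sketch below scatters the mass of a set of material points onto a background grid. The grid size, particle count, and nearest-node weighting are simplifying assumptions; real MPM implementations use smoother interpolation kernels over several neighboring nodes.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 16  # background grid resolution (assumed)
N = 200    # number of material points (assumed)

# Material points sampled inside a blob; each carries a share of the mass.
pos = rng.uniform(0.3, 0.7, size=(N, 2))  # positions in the unit square
mass = np.full(N, 1.0 / N)

# Particle-to-grid transfer: each point deposits its mass on the nearest
# grid node. (Real MPM uses B-spline weights over neighboring nodes.)
grid_mass = np.zeros((GRID, GRID))
idx = np.clip((pos * GRID).astype(int), 0, GRID - 1)
np.add.at(grid_mass, (idx[:, 0], idx[:, 1]), mass)

print(f"mass conserved: {grid_mass.sum():.3f}")  # ~1.0
```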

In the same way that nearby pixels in an image are related (like the pixels that form a tree in a photo), they built their algorithm to understand that nearby action points have stronger correlations. The points around the robot’s “arm” will move similarly when it changes shape, while the points on the robot’s “leg” will also move similarly, but in a different way than those on the “arm.”

In addition, the researchers use the same machine-learning model to look at the environment and predict the actions the robot should take, which makes it more efficient.
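A minimal sketch of this kind of image-in, action-map-out model, written in PyTorch: because the per-point actions come from convolutions over shared features, nearby action points are naturally correlated. The layer sizes and channel counts are illustrative guesses, not the researchers’ architecture.

```python
import torch
import torch.nn as nn

class ImageToActionField(nn.Module):
    """Maps an observation image to a 2D action map of the same resolution.

    Actions are produced by convolutions over shared features, so
    spatially close action points are naturally correlated.
    """
    def __init__(self, obs_channels=3, action_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(obs_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, action_channels, kernel_size=3, padding=1),
            nn.Tanh(),  # bounded muscle activations in [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

model = ImageToActionField()
obs = torch.rand(1, 3, 64, 64)  # a 64x64 image of the robot and surroundings
actions = model(obs)            # one action value per spatial location
print(actions.shape)            # torch.Size([1, 1, 64, 64])
```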

Building the simulator

After developing this approach, the researchers needed a way to test it, so they created a simulation environment called DittoGym.

DittoGym features eight tasks that evaluate a reconfigurable robot’s ability to dynamically change shape. In one, the robot must elongate and curve its body so it can weave around obstacles to reach its destination. In another, it must change its shape to mimic letters of the alphabet.
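Assuming DittoGym exposes its tasks through the standard Gymnasium interface, interacting with one might look like the sketch below; the import and environment id are hypothetical placeholders, not confirmed names from the package.

```python
# Hypothetical usage sketch. Assumes DittoGym registers its tasks with
# Gymnasium; "DittoGym/ShapeMatch-v0" is an illustrative id, not a
# confirmed name from the package.
import gymnasium as gym
# import dittogym  # importing would register the benchmark's environments

env = gym.make("DittoGym/ShapeMatch-v0")  # hypothetical task id
obs, info = env.reset(seed=0)
for _ in range(200):
    action = env.action_space.sample()    # random 2D action field
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```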

In this simulation, a reconfigurable soft robot, trained using the researchers’ control algorithm, must change its shape to mimic objects such as stars and the letters MIT.

Image: Courtesy of the researchers

“Our task selection in DittoGym is consistent with both general reinforcement learning benchmark design principles and the specific needs of reconfigurable robots. Each task is designed to represent a certain property we deem important, such as the capability to navigate through long-horizon explorations, the ability to analyze the environment, and the ability to interact with external objects,” Huang says. “Together, we believe they can give users a comprehensive understanding of the flexibility of reconfigurable robots and the effectiveness of our reinforcement learning scheme.”

Their algorithm outperformed baseline methods and was the only technique capable of completing multistage tasks that required several shape changes.

“We have a stronger correlation between action points that are closer together, and I think that’s the key to making this all work so well,” Chen says.

While it may be many years before shape-shifting robots are deployed in the real world, Chen and his collaborators hope their work will inspire other researchers not only to study reconfigurable soft robots but also to think about leveraging 2D action spaces for other complex control problems.
