Thursday, May 8, 2025

This system enables robots to identify an object's properties through handling


A human clearing junk out of an attic can often guess the contents of a box simply by picking it up and giving it a shake, without needing to see what's inside. Researchers from MIT, Amazon Robotics, and the University of British Columbia have taught robots to do something similar.

They developed a technique that lets robots use only internal sensors to learn an object's weight, softness, or contents by picking it up and gently shaking it. With their method, which requires no external measurement tools or cameras, the robot can accurately estimate parameters such as an object's mass in a matter of seconds.

This low-cost technique could be especially useful in applications where cameras may be less effective, such as sorting objects in a dark basement or clearing rubble inside a building that partially collapsed after an earthquake.

Key to their approach is a simulation process that incorporates models of the robot and of the object to rapidly identify the object's characteristics as the robot interacts with it.

The researchers' technique is as good at guessing an object's mass as some more complex and expensive methods that incorporate computer vision. In addition, their data-efficient approach is robust enough to handle many kinds of unseen scenarios.

“This idea is general, and I believe we are just scratching the surface of what a robot can learn in this way. My dream would be to have robots go out into the world, touch things and move things in their environments, and figure out the properties of everything they interact with on their own,” says Peter Yichen Chen, an MIT postdoc and lead author of a paper on this technique.

His co-authors include fellow MIT postdoc Chao Liu; Pingchuan Ma PhD ’25; Jack Eastman MEng ’24; Dylan Randle and Yuri Ivanov of Amazon Robotics; and MIT professors of electrical engineering and computer science Daniela Rus, who leads MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and Wojciech Matusik, who leads the Computational Design and Fabrication Group within CSAIL. The research will be presented at the International Conference on Robotics and Automation.

Sensing signals

The researchers' method leverages proprioception, which is a human's or robot's ability to sense its movement or position in space.

For instance, a human who lifts a dumbbell at the gym can sense the weight of that dumbbell in their wrist and bicep, even though they are holding it in their hand. In the same way, a robot can “feel” the heaviness of an object through the multiple joints in its arm.

“A human doesn't have super-accurate measurements of the joint angles in our fingers or the precise amount of torque we are applying to an object, but a robot does. We take advantage of these abilities,” Liu says.

When the robot lifts an object, the researchers' system gathers signals from the robot's joint encoders, which are sensors that detect the rotational position and speed of its joints during movement.

Most robots have joint encoders within the motors that drive their moveable parts, Liu adds. This makes their technique more cost-effective than some approaches because it doesn't need extra components like tactile sensors or vision-tracking systems.
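As a concrete sketch of what that signal stream might look like, here is a minimal, hypothetical Python example. The `read_encoders` function is a stand-in for a real driver call, and the joint count, sampling rate, and all names are illustrative assumptions, not part of the published system.

```python
from dataclasses import dataclass

@dataclass
class EncoderSample:
    time: float        # seconds since the motion started
    angles: list       # one joint angle (rad) per joint
    velocities: list   # one angular velocity (rad/s) per joint

def read_encoders(t):
    """Stand-in for a real encoder driver call; returns fabricated readings."""
    n_joints = 6
    return ([0.1 * t] * n_joints, [0.1] * n_joints)

# Sample the joint encoders while the robot lifts and shakes the object.
trajectory = []
for step in range(100):  # 100 samples over a hypothetical 1-second motion
    t = step * 0.01
    angles, velocities = read_encoders(t)
    trajectory.append(EncoderSample(t, angles, velocities))
```

The recorded trajectory is the only measurement the method needs; everything downstream works from these joint signals.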

To estimate the object's properties during a robot-object interaction, their system relies on two models: one that simulates the robot and its motion, and one that simulates the dynamics of the object.

“Having an accurate digital twin of the real world is really important for the success of our method,” adds Chen.

Their algorithm “watches” the robot and the object move during a physical interaction and uses the detailed joint encoder data to work backward and identify the object's properties.

For instance, a heavier object will move slower than a light one if the robot applies the same amount of force.
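This cue is just Newton's second law, a = F / m: for a fixed applied force, the observed acceleration directly reveals the mass. A toy illustration, with made-up numbers:

```python
# Under the same applied force, a heavier object accelerates less
# (Newton's second law: a = F / m), so observed motion reveals mass.
force = 10.0  # newtons, applied identically to both objects

light_mass, heavy_mass = 1.0, 5.0
light_accel = force / light_mass  # 10.0 m/s^2
heavy_accel = force / heavy_mass  # 2.0 m/s^2 -> the heavier object moves slower

# Inverting the relation: an observed acceleration implies a mass.
observed_accel = 2.0
inferred_mass = force / observed_accel
print(inferred_mass)  # 5.0
```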

Differentiable simulations

They use a technique called differentiable simulation, which allows the algorithm to predict how slight changes in an object's properties, such as mass or softness, affect the robot's ending joint position. The researchers built their simulations using NVIDIA's Warp library, an open-source tool that supports differentiable simulations.

Once the differentiable simulation matches the robot's real movements, the system has identified the correct property. The algorithm can do this in a matter of seconds and only needs to see one real-world trajectory of the robot in motion to perform the calculations.
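The fitting loop described here can be sketched in miniature. This is an illustrative stand-in, not the authors' code: it uses a closed-form point-mass model and finite-difference gradients instead of Warp's automatic differentiation, but the structure is the same — adjust a candidate parameter until the simulated trajectory matches the observed one.

```python
# Identify an unknown mass by matching a simulated trajectory to an
# "observed" one via gradient descent on the trajectory-matching loss.
F = 2.0          # known applied force (N)
true_mass = 0.5  # ground truth, used here only to synthesize "observed" data
times = [0.1 * i for i in range(1, 11)]

def simulate(mass):
    """Position of a point mass under constant force: x(t) = 0.5*(F/m)*t^2."""
    return [0.5 * (F / mass) * t * t for t in times]

observed = simulate(true_mass)  # stands in for real encoder-derived motion

def loss(mass):
    """Squared error between the simulated and observed trajectories."""
    return sum((s - o) ** 2 for s, o in zip(simulate(mass), observed))

# A differentiable simulator would supply this gradient automatically;
# here we approximate it with central finite differences.
mass, lr, eps = 2.0, 0.01, 1e-6  # initial guess, step size, FD epsilon
for _ in range(2000):
    grad = (loss(mass + eps) - loss(mass - eps)) / (2 * eps)
    mass -= lr * grad

print(round(mass, 3))  # converges toward the true mass, ~0.5
```

Once the loss is near zero, the candidate value is the identified property; the same loop generalizes to other parameters, such as stiffness, given an appropriate simulator.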

“Technically, as long as you know the model of the object and how the robot can apply force to that object, you should be able to figure out the parameter you want to identify,” Liu says.

The researchers used their method to learn the mass and softness of an object, but their technique could also determine properties like moment of inertia or the viscosity of a fluid inside a container.

In addition, because their algorithm does not need an extensive training dataset, as some methods that rely on computer vision or external sensors do, it would not be as prone to failure when faced with unseen environments or new objects.

In the future, the researchers want to try combining their method with computer vision to create a multimodal sensing technique that is even more powerful.

“This work is not trying to replace computer vision. Both methods have their pros and cons. But here we have shown that, without a camera, we can already figure out some of these properties,” Chen says.

They also want to explore applications with more complex robotic systems, like soft robots, and more complex objects, including liquids or granular media such as sand.

In the long run, they hope to apply this technique to improve robot learning, enabling future robots to quickly develop new manipulation skills and adapt to changes in their environments.

“Determining the physical properties of objects from data has long been a challenge in robotics, particularly when only limited or noisy measurements are available. This work is significant because it shows that robots can accurately infer properties like mass and softness using only their internal joint sensors, without relying on external cameras or specialized measurement tools,” says Miles Macklin, senior director of simulation technology at NVIDIA, who was not involved with this research.

This work is funded, in part, by Amazon and the GIST-CSAIL Research Program.
