Wednesday, December 25, 2024

Giving soft robots senses

One of the hottest topics in robotics is the field of soft robots, which use flexible, elastic materials instead of traditional rigid ones. But the use of soft robots has been limited by their lack of good sensing: a good gripping robot needs to feel what it is touching (tactile sensing) and to sense the positions of its fingers (proprioception), and most soft robots have lacked both.

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed new tools that let robots better perceive what they are interacting with: the ability to see and classify objects, and a softer, more delicate touch.

“We want to make it possible to see the world by feeling the world. The robots’ soft hands have sensorized skin that allows them to pick up a range of objects, from delicate ones like potato chips to heavy ones like milk bottles,” says CSAIL Director Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and deputy dean of research at the MIT Stephen A. Schwarzman College of Computing.

This work was supported by the Toyota Research Institute.

One paper builds on last year’s research by MIT and Harvard University, in which the team developed a soft and strong robotic gripper in the form of a cone-shaped origami structure. It collapses onto objects the way a Venus flytrap does, picking up items that weigh up to 100 times its own weight.

To bring this versatility and adaptability even closer to that of a human hand, the team proposed a sensing addition: tactile sensors made of latex “bubbles” (balloons) connected to pressure transducers. The new sensors let the gripper not only pick up objects as delicate as potato chips, but also classify them, so the robot can better understand what it is picking up while still demonstrating a gentle touch.

In classification tests, the sensors correctly identified 10 objects with over 90 percent accuracy, even when an object slipped out of the grip.
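
Neither paper’s code is reproduced here, but the basic idea of identifying objects from bubble-sensor pressure signatures can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the sensor count, the feature choices, and the k-nearest-neighbor classifier are assumptions, not the authors’ pipeline.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical sketch: classify grasped objects from the pressure
# traces of latex "bubble" sensors. Sensor count and features are
# assumptions, not the authors' implementation.

N_SENSORS = 4            # assumed number of bubble sensors on the gripper
SAMPLES_PER_GRASP = 100  # pressure readings recorded during one grasp

def grasp_features(trace: np.ndarray) -> np.ndarray:
    """Reduce a (SAMPLES_PER_GRASP, N_SENSORS) pressure trace to a
    fixed-length vector: per-sensor mean, peak, and final pressure."""
    return np.concatenate([trace.mean(axis=0),
                           trace.max(axis=0),
                           trace[-1]])

def train_classifier(traces, labels):
    """traces: list of (SAMPLES_PER_GRASP, N_SENSORS) arrays from
    example grasps; labels: the object name for each grasp."""
    X = np.stack([grasp_features(t) for t in traces])
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X, labels)
    return clf

def identify(clf, trace):
    """Predict which object is being held from its pressure signature."""
    return clf.predict(grasp_features(trace).reshape(1, -1))[0]
```

In practice, each training grasp’s pressure trace would be recorded from the real transducers and labeled with the object’s name before the classifier is fit.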

“Unlike many other soft tactile sensors, ours can be quickly manufactured, fitted into grippers, and demonstrate sensitivity and reliability,” says MIT postdoc Josie Hughes, lead author of the new paper on the sensors. “We hope they provide a new method of soft sensing that can be applied to a wide range of different applications in manufacturing settings, such as packaging and lifting.”

In the second paper, a group of researchers created a soft robotic finger called “GelFlex” that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of body position and movement).

The gripper, which looks much like the two-fingered cup grippers you might see at a soda station, uses a tendon-driven mechanism to move the fingers. When tested on metal objects of various shapes, the system achieved over 96 percent recognition accuracy.

“Our soft finger can provide high proprioception accuracy and accurately predict grasped objects, and it can withstand significant impacts without harming the environment it interacts with, or itself,” says Yu She, lead author of the new paper on GelFlex. “By constraining the soft fingers with a flexible exoskeleton and taking high-resolution measurements with the built-in cameras, we open up a wide range of possibilities for soft manipulators.”

Magic ball senses

The magic ball gripper is made of a soft origami structure encased in a soft balloon. When suction is applied to the balloon, the origami structure closes around the object, and the gripper conforms to the object’s shape.

While this motion lets the gripper grasp a much wider range of objects than ever before, such as soup cans, hammers, wine glasses, drones, and even a single broccoli floret, the finer subtleties of delicacy and understanding were still out of reach, until they added the sensors.

When force or strain is applied to the sensors, their internal pressure changes, and the team can measure this change in pressure to identify when the gripper is making contact and how firmly it is pressing.

In addition to the latex sensors, the team developed a feedback algorithm that gives the gripper a human-like duality of strength and precision; with it, 80 percent of the tested objects were grasped without damage.
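
The article does not spell out the team’s control law, so the sketch below shows only one plausible shape for such a feedback loop: tighten the vacuum-driven grip until the bubble sensors report a target contact pressure, then hold, so that delicate objects see no more force than they need. The hardware interfaces (`gripper.set_vacuum`, `sensors.max_pressure`) and all constants are hypothetical placeholders.

```python
import time

# Hypothetical grip-feedback loop (not the authors' algorithm):
# tighten the gripper only until the bubble sensors report a target
# contact pressure, giving a firm but gentle grasp.

TARGET_PRESSURE = 2.0  # kPa above baseline; assumed setpoint
TOLERANCE = 0.2        # acceptable band around the setpoint
STEP = 0.01            # per-cycle change to the vacuum command (0..1)

def grasp(gripper, sensors, timeout_s=5.0):
    """Close until contact pressure sits in the target band.
    `gripper.set_vacuum(level)` and `sensors.max_pressure()` stand in
    for the real hardware interfaces."""
    level = 0.0
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        p = sensors.max_pressure()          # highest bubble reading
        if p < TARGET_PRESSURE - TOLERANCE:
            level = min(1.0, level + STEP)  # too loose: tighten
        elif p > TARGET_PRESSURE + TOLERANCE:
            level = max(0.0, level - STEP)  # too tight: relax
        else:
            return True                     # stable, gentle grip
        gripper.set_vacuum(level)
        time.sleep(0.01)                    # ~100 Hz control loop
    return False                            # never reached a stable grip
```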

The team tested the gripper’s sensors on a variety of household items, from heavy bottles to small, fragile objects, including cans, apples, a toothbrush, a water bottle, and a bag of cookies.

In the future, the team hopes to scale up the methodology, using computational design and reconstruction methods to improve the resolution and coverage of the sensor technology. Ultimately, they imagine using the new sensors to create a fluidic sensing skin that shows scalability and sensitivity.

Hughes co-wrote the new paper on the sensors with Rus; they will present it virtually at the 2020 International Conference on Robotics and Automation.

GelFlex

In the second paper, the CSAIL team looked at giving a soft robotic gripper more nuanced, human-like senses. Soft fingers allow a wide range of deformations, but using them in a controlled way requires rich tactile and proprioceptive sensing. The team used embedded cameras with wide-angle fisheye lenses that capture the finger’s deformations in great detail.

To create GelFlex, the team used silicone to build a soft and transparent finger, then placed one camera near the fingertip and another in the middle of the finger. They painted reflective ink on the front and side surfaces of the finger and added LED lights on the back. This allows the internal fisheye cameras to observe the state of the finger’s front and side surfaces.

The team trained neural networks to extract key information from the embedded cameras for feedback. One network was trained to predict GelFlex’s bending angle, and another to estimate the shape and size of grasped objects. The gripper could then pick up a variety of items such as a Rubik’s Cube, a DVD case, or a block of aluminum.
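
The paper’s actual network architectures are not detailed in this article. As a rough stand-in, the PyTorch sketch below pairs a small convolutional regressor for the bending angle with a classifier for object shape and size, both fed by frames from the internal cameras; the layer sizes, input resolution, and class count are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for GelFlex's two networks: both consume
# frames from the finger's internal fisheye cameras.

def conv_backbone():
    """Tiny CNN shared by both heads; input is a 3x64x64 camera frame."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

class BendAngleNet(nn.Module):
    """Proprioception head: regresses the finger's bending angle."""
    def __init__(self):
        super().__init__()
        self.backbone = conv_backbone()
        self.head = nn.Linear(64, 1)   # one scalar: angle in degrees

    def forward(self, frame):
        return self.head(self.backbone(frame))

class ObjectShapeNet(nn.Module):
    """Tactile head: classifies the grasped object's shape/size class."""
    def __init__(self, n_classes=8):   # assumed number of classes
        super().__init__()
        self.backbone = conv_backbone()
        self.head = nn.Linear(64, n_classes)

    def forward(self, frame):
        return self.head(self.backbone(frame))

# Example: push one 64x64 frame through each network.
frame = torch.randn(1, 3, 64, 64)
angle = BendAngleNet()(frame)           # predicted bending angle
shape_logits = ObjectShapeNet()(frame)  # per-class scores
```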

During testing, the average positional error while grasping was less than 0.77 millimeters, better than that of a human finger. In a second series of tests, the gripper was tasked with grasping and recognizing cylinders and boxes of various sizes; of 80 trials, only three were misclassified.

In the future, the team hopes to improve the proprioception and tactile-sensing algorithms and to use vision-based sensors to estimate more complex finger configurations, such as twisting or lateral bending, which are challenging for conventional sensors but should be achievable with embedded cameras.

Yu She co-authored the GelFlex paper with MIT graduate student Sandra Q. Liu, Tsinghua University’s Peiyu Yu, and MIT professor Edward Adelson. They will present the paper virtually at the 2020 International Conference on Robotics and Automation.
