Imagine grasping a heavy object, like a pipe wrench, with one hand. You would likely grab the wrench with your entire fingers, not just your fingertips. Sensory receptors in the skin, which run along the length of each finger, send information to your brain about the tool you are grasping.
In a robot hand, touch sensors that use cameras to obtain information about grasped objects are small and flat, so they are often located in the fingertips. As a result, these robots use only their fingertips to grasp objects, typically with a pinching motion. This limits the manipulation tasks they can perform.
MIT researchers have developed a camera-based touch sensor that is long, curved, and shaped like a human finger. Their device provides high-resolution touch sensing over a large area. The sensor, called GelSight Svelte, uses two mirrors to reflect and refract light so that a single camera, placed at the base of the sensor, can see along the entire length of the finger.
In addition, the researchers built the finger-shaped sensor with a flexible backbone. By measuring how the backbone bends when the finger touches an object, they can estimate the force being applied to the sensor.
The researchers used GelSight Svelte sensors to produce a robot hand that was able to grasp a heavy object the way a human would, using the entire sensing area of all three fingers. The hand could also perform the same pinch grasps common in traditional robotic grippers.
“Because our new sensor is shaped like a human finger, we can use it to perform different types of grasps for different tasks, instead of using pinch grasps for everything. A parallel gripper has limited capabilities. Our sensor really opens up new possibilities for different manipulation tasks we could perform with robots,” says Alan (Jialiang) Zhao, a mechanical engineering graduate student and lead author of a paper on GelSight Svelte.
Zhao wrote the paper with senior author Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems.
Mirror, mirror
Cameras used in touch sensors are limited by their size, the focal length of their lenses, and their viewing angles. Therefore, these touch sensors tend to be small and flat, which confines them to a robot’s fingertips.
With a longer sensing area that more closely resembles a human finger, the camera would need to sit farther from the sensing surface to see the entire area. This is especially challenging given the size and shape constraints of a robotic gripper.
Zhao and Adelson solved this problem by using two mirrors that reflect and refract light toward a single camera placed at the base of the finger.
GelSight Svelte contains one flat, angled mirror that sits across from the camera and one long, curved mirror that runs along the back of the sensor. These mirrors redirect light rays so the camera can see along the entire length of the finger.
To optimize the shape, angle, and curvature of the mirrors, the researchers designed software that simulates the reflection and refraction of light.
“With this software, we can easily change the position of the mirrors and how they are curved to get an idea of what the image will look like once the sensor is made,” Zhao explains.
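The core operation in such a simulation is tracing how a ray folds when it strikes a mirror. As a rough illustration of the idea (not the researchers' actual software), the sketch below reflects a 2D ray off an angled mirror using the standard reflection formula r = d − 2(d·n)n; the vectors and mirror angle are invented for this example.

```python
import numpy as np

def reflect(direction, normal):
    """Reflect a ray direction across a mirror surface normal.

    Both arguments are 2D vectors; `normal` must be unit length.
    """
    direction = np.asarray(direction, dtype=float)
    normal = np.asarray(normal, dtype=float)
    return direction - 2.0 * np.dot(direction, normal) * normal

# A ray traveling straight up the finger hits a mirror tilted at 45 degrees
# and gets folded 90 degrees, back toward a camera at the base.
ray = np.array([0.0, 1.0])
mirror_normal = np.array([np.sin(np.pi / 4), -np.cos(np.pi / 4)])
folded = reflect(ray, mirror_normal)
print(folded)  # [1. 0.] — the ray is redirected sideways
```

Simulating many such rays against candidate mirror shapes lets a designer preview the camera's view before fabricating anything, which is the workflow Zhao describes.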
The mirrors, a camera, and two sets of LEDs for illumination are attached to a plastic backbone and encased in a flexible skin made of silicone gel. The camera views the back of the skin from the inside; based on the skin’s deformation, it can detect where contact occurs and measure the geometry of the object’s contact surface.
In addition, red and green LEDs reveal how deeply the gel is pressed when an object is grasped, based on the color saturation at different locations on the sensor.
The researchers can use this color-saturation information to reconstruct a three-dimensional depth image of the object being grasped.
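One simple way to picture the saturation-to-depth step (a hedged sketch, not the authors' pipeline): calibrate the sensor by recording saturation at known indentation depths, then interpolate new readings against that curve. The calibration numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical calibration data: color saturation measured at known
# indentation depths (mm). A real sensor is calibrated per device.
cal_saturation = np.array([0.05, 0.20, 0.45, 0.70, 0.90])
cal_depth_mm   = np.array([0.0,  0.5,  1.0,  1.5,  2.0])

def saturation_to_depth(sat_image):
    """Map a per-pixel saturation image to an indentation depth map
    by interpolating against the calibration curve."""
    return np.interp(sat_image, cal_saturation, cal_depth_mm)

sat = np.array([[0.05, 0.45],
                [0.70, 0.90]])
depth = saturation_to_depth(sat)
print(depth)  # [[0.  1. ] [1.5 2. ]]
```

Applied per pixel, this kind of mapping turns a 2D saturation image into the depth image the article describes.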
The sensor’s plastic backbone also enables it to determine proprioceptive information, such as the torques applied to the finger. The backbone bends and flexes when an object is grasped, and the researchers use machine learning to estimate how much force is being applied to the sensor based on these deformations.
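The learned force estimator can be thought of as a regression from deformation features (e.g., image-difference statistics) to applied force. As a minimal stand-in, assuming a linear relationship and synthetic data, one could fit it with least squares:

```python
import numpy as np

# Synthetic training data: each row is a vector of deformation features
# extracted from the camera image; forces follow a hidden linear model.
# All numbers here are invented for illustration.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 4))
true_weights = np.array([2.0, -1.0, 0.5, 3.0])
forces = features @ true_weights + 0.01 * rng.normal(size=200)

# Fit weights w that minimize ||features @ w - forces||^2.
w, *_ = np.linalg.lstsq(features, forces, rcond=None)

# Estimate the force for a new deformation observation.
new_obs = np.array([1.0, 0.0, 0.0, 0.0])
estimate = new_obs @ w
print(estimate)  # close to 2.0, the hidden first weight
```

The actual system presumably uses a richer learned model, but the input/output structure, deformation features in, force estimate out, is the same.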
But combining these components into a working sensor was no easy task, Zhao says.
“Making sure you have the right curvature so the mirror matches what we have in simulation is quite difficult. Also, I found out that certain types of superglue inhibit the curing of silicone. It took a lot of experimentation to create a sensor that actually works,” he adds.
Versatile grip
After refining the design, the researchers tested GelSight Svelte by pressing different objects, such as a screw, against different spots on the sensor to check image clarity and see how well it could determine the objects’ shapes.
They also used three sensors to build the GelSight Svelte hand, which can perform multiple grasps, including a pinch grasp, a lateral pinch grasp, and a power grasp that uses the entire sensing area of all three fingers. Most robotic hands, which are shaped like parallel-jaw grippers, can only perform pinch grasps.
The three-finger power grasp lets the robotic hand hold a heavier object more stably, while a pinch grasp remains useful when an object is very small. Being able to perform both types of grasps with one hand gives the robot greater versatility, Zhao says.
In the future, the researchers plan to improve GelSight Svelte so the sensor is articulated and can bend at joints, more like a human finger.
“Optical-tactile finger sensors allow robots to use inexpensive cameras to collect high-resolution images of surface contact, and by observing the deformation of the flexible surface, the robot estimates the contact shape and applied forces. This work advances the GelSight finger design, with improvements in full finger coverage and the ability to approximate bending moments using image differences and machine learning,” says Monroe Kennedy III, an assistant professor of mechanical engineering at Stanford University, who was not involved in the research. “Improving robotic tactile sense to approach human capabilities is a necessity and perhaps a catalyst for developing robots capable of performing complex, dexterous tasks.”
This research is supported in part by the Toyota Research Institute.