Pick-and-place machines are a type of automated equipment used to place objects into systematic, organized locations. These machines are used in a variety of applications—from electronics assembly to packaging, bin picking, and even inspection—but many current pick-and-place solutions are limited. They lack "fine generalization": the ability to solve many tasks without sacrificing accuracy.
"In industry, you often see that [manufacturers] end up with very customized solutions to the specific problem that they have, so a lot of engineering and not so much flexibility in terms of the solution," says Maria Bauza Villalonga PhD '22, a senior research scientist at Google DeepMind, where she works on robotics and robotic manipulation. "SimPLE solves that problem and provides a pick-and-place solution that is flexible and still provides the needed precision."
A new paper by MechE researchers explores how to pick and place with greater precision. In precision pick-and-place, also known as kitting, the robot transforms an unstructured arrangement of objects into an organized arrangement. The approach, dubbed SimPLE (Simulation to Pick Localize and placE), learns to pick, regrasp, and place objects using the object's computer-aided design (CAD) model, all without any prior experience or encounters with the specific objects.
"The promise of SimPLE is that we can solve many different tasks with the same hardware and software, using simulation to learn models that adapt to each specific task," says Alberto Rodriguez, an MIT visiting scientist, former MechE faculty member, and currently associate director of manipulation research at Boston Dynamics. SimPLE was developed by members of MIT's Manipulation and Mechanisms Lab (MCube) under Rodriguez's leadership.
"In this work, we show that it is possible to achieve the levels of positional accuracy that are required for many industrial pick-and-place tasks without any other specialization," Rodriguez says.
Precision pick-and-place: MIT PhD candidate Antonia Delores Bronars SM '22 describes the novel SimPLE (Simulation to Pick Localize and placE) system.
Video: John Freidah/MIT Department of Mechanical Engineering
SimPLE uses a dual-arm robot equipped with visuotactile sensors and has three main components: task-aware grasping, visuotactile perception, and regrasp planning. Real observations are matched against a set of simulated observations via supervised learning, yielding a distribution of probable object poses that enables accurate placement.
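The matching step described above can be sketched as a nearest-neighbor-style comparison between a real sensor reading and a bank of simulated readings, each labeled with the object pose that produced it. The function name, the feature-vector representation, and the softmax weighting below are illustrative assumptions for the sketch, not the authors' actual implementation.

```python
import numpy as np

def estimate_pose_distribution(real_obs, sim_obs, sim_poses, temperature=0.1):
    """Compare a real observation against simulated observations and
    return a probability distribution over the candidate poses.

    real_obs:  (d,) feature vector from the real visuotactile sensors
    sim_obs:   (n, d) feature vectors rendered in simulation, one per pose
    sim_poses: (n, k) candidate object poses (e.g. x, y, theta)
    """
    # Distance from the real observation to each simulated observation.
    dists = np.linalg.norm(sim_obs - real_obs, axis=1)
    # Softmax over negative distances: closer matches get higher probability.
    logits = -dists / temperature
    logits -= logits.max()          # subtract max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return sim_poses, probs

# Toy usage: three candidate poses; the real reading is closest to the second.
sim_obs = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
sim_poses = np.array([[0.0], [0.5], [1.0]])
poses, probs = estimate_pose_distribution(np.array([0.9, 1.1]), sim_obs, sim_poses)
best_pose = poses[np.argmax(probs)]
```

In this toy setup, the distribution concentrates on the second candidate pose, so the planner would place the object as if it were at that pose; a real system would match much richer visual and tactile observations against simulated renderings of the CAD model.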
In experiments, SimPLE successfully picked and placed a diverse range of objects spanning a wide variety of shapes, achieving successful placements more than 90 percent of the time for six objects and more than 80 percent of the time for 11 objects.
"There is an intuitive understanding in the robotics community that vision and touch are both useful, but [until now] there haven't been many systematic demonstrations of how they can be useful for complex robotics tasks," says mechanical engineering doctoral student Antonia Delores Bronars SM '22. Bronars, who is now working with Pulkit Agrawal, assistant professor in the Department of Electrical Engineering and Computer Science (EECS), is continuing her doctoral work investigating the incorporation of haptic capabilities into robotic systems.
“Most work on grasping ignores downstream tasks,” says Matt Mason, a principal investigator at Berkshire Grey and a professor emeritus at Carnegie Mellon University, who was not involved in the work. “This paper goes beyond the desire to imitate humans and shows from a strictly functional perspective the utility of combining touch, vision, and two hands.”
Ken Goldberg, the William S. Floyd Jr. Distinguished Chair in Engineering at the University of California at Berkeley, who was also not involved in the study, says the robot manipulation methodology described in the paper offers a valuable alternative to the trend toward artificial intelligence and machine learning methods.
“The authors combine well-established geometric algorithms that can reliably achieve high precision for a given set of object shapes and demonstrate that this combination can significantly improve performance over AI methods,” says Goldberg, who is also co-founder and chief scientist of Ambi Robotics and Jacobi Robotics. “This could be immediately useful in industry and is a great example of what I call ‘good old-fashioned engineering’ (GOFE).”
Bauza and Bronars say their work is the result of collaboration spanning several generations of researchers.
"To really show how vision and touch can be useful together, you need to build a full robotic system, which is something that's very difficult for one person to do in a short period of time," Bronars says. "Collaboration, with each other and with Nikhil [Chavan-Dafle PhD '20] and Yifan [Hou PhD '21, CMU], across many generations and labs, allowed us to build a comprehensive system."