Friday, April 11, 2025

Our latest advances in robot dexterity

Authors

Robotics Team

Two new AI systems, ALOHA Unleashed and DemoStart, help robots learn to perform complex tasks that require dexterous movement

People perform many everyday tasks, such as tying shoelaces or tightening screws. But these highly dexterous tasks are incredibly difficult for robots to learn. To be more useful to people, robots need to get better at interacting with physical objects in dynamic environments.

Today we present two new papers describing our latest advances in artificial intelligence (AI) research on robot dexterity: ALOHA Unleashed, which helps robots learn to perform complex and novel two-armed manipulation tasks; and DemoStart, which uses simulations to improve real-world performance on a multi-fingered robotic hand.

By helping robots learn from human demonstrations and turn images into actions, these systems are paving the way for robots that can perform a wide range of useful tasks.

Improving learning through imitation with two robot arms

Until now, most advanced AI robots have only been able to pick up and place objects using a single arm. In our new paper, we introduce ALOHA Unleashed, which achieves a high level of dexterity in bi-arm manipulation. With this new method, our robot learned to tie a shoelace, hang a shirt, repair another robot, insert a gear, and even clean a kitchen.

Example of a two-armed robot straightening shoelaces and tying them into a bow.

Example of a dual-arm robot laying out a polo shirt on a table, placing it on a clothes hanger, and then hanging it on the rack.

Example of a dual-arm robot repairing another robot.

The ALOHA Unleashed method builds on our ALOHA 2 platform, which is based on the original ALOHA (a low-cost, open-source hardware system for bimanual teleoperation) from Stanford University.

ALOHA 2 is significantly more dexterous than previous systems because it has two hands that can be easily teleoperated for training and data collection, and it allows robots to learn how to perform new tasks with fewer demonstrations.

We also improved the ergonomics of the robot hardware and enhanced the learning process in our latest system. First, we collected demonstration data by teleoperating the robot through challenging tasks such as tying shoelaces and hanging T-shirts. Then we applied a diffusion method, predicting the robot’s actions from random noise, similar to how our Imagen model generates images. This helps the robot learn from the data, so it can perform the same tasks on its own.
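To make the idea concrete, here is a minimal sketch of what diffusion-style action prediction can look like: a small network is trained to predict the noise in a noisy action sequence given an observation embedding, and at run time a pure-noise sequence is iteratively denoised into motor commands. The DenoisingNet architecture, the 14-dimensional bi-arm action space, and the simple denoising update are placeholder assumptions for illustration, not the ALOHA Unleashed implementation.

```python
import torch
import torch.nn as nn

class DenoisingNet(nn.Module):
    """Predicts the noise in a noisy action sequence, conditioned on an
    embedding of the robot's camera observations and a diffusion timestep."""
    def __init__(self, action_dim: int, obs_dim: int, horizon: int):
        super().__init__()
        self.horizon, self.action_dim = horizon, action_dim
        self.net = nn.Sequential(
            nn.Linear(horizon * action_dim + obs_dim + 1, 256),
            nn.ReLU(),
            nn.Linear(256, horizon * action_dim),
        )

    def forward(self, noisy_actions, obs_embedding, t):
        x = torch.cat([noisy_actions.flatten(1), obs_embedding, t], dim=-1)
        return self.net(x).view(-1, self.horizon, self.action_dim)

@torch.no_grad()
def sample_actions(model, obs_embedding, horizon=16, action_dim=14, steps=50):
    """Start from pure Gaussian noise and iteratively denoise it into a
    short sequence of actions (e.g. joint targets for two arms)."""
    actions = torch.randn(1, horizon, action_dim)        # pure random noise
    for step in reversed(range(steps)):
        t = torch.full((1, 1), step / steps)             # diffusion timestep
        predicted_noise = model(actions, obs_embedding, t)
        actions = actions - predicted_noise / steps      # crude denoising update
    return actions

# Usage with random placeholder inputs:
model = DenoisingNet(action_dim=14, obs_dim=64, horizon=16)
obs_embedding = torch.randn(1, 64)                       # stand-in for an image encoder output
action_plan = sample_actions(model, obs_embedding)       # shape: (1, 16, 14)
```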

Learning robot behavior from several simulated demonstrations

Controlling a dexterous robotic hand is a complex task, and it becomes even more complex with every additional finger, joint, and sensor. In another new paper, we introduce DemoStart, which uses a reinforcement learning algorithm to help robots acquire dexterous behaviors in simulation. These learned behaviors are especially useful for complex embodiments, such as multi-fingered hands.

DemoStart first learns from easy states and, over time, starts learning from progressively harder states until it masters the task as well as it can. It requires 100x fewer simulated demonstrations to learn how to solve a task in simulation than is typically needed when learning from real-world examples for the same purpose.
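As an illustration of what learning from easy states first can mean in practice, the sketch below builds a simple curriculum by resetting training episodes to states recorded from demonstrations, and preferring states the current policy solves only some of the time. The function, thresholds, and toy data are assumptions made for this example; the actual DemoStart procedure described in the paper is more sophisticated.

```python
import random

def pick_start_state(demo_states, success_history, min_rate=0.1, max_rate=0.9):
    """Choose an episode start state from recorded demonstration states.

    Untried states and states the policy solves only some of the time are the
    most informative, so they are preferred; states that are always solved or
    never solved are skipped when possible.
    """
    candidates = []
    for state_id, state in enumerate(demo_states):
        results = success_history.get(state_id, [])
        if not results:                          # never tried: worth exploring
            candidates.append(state)
            continue
        rate = sum(results) / len(results)
        if min_rate <= rate <= max_rate:         # intermediate difficulty
            candidates.append(state)
    return random.choice(candidates) if candidates else random.choice(demo_states)

# Usage with toy data: three demonstration states and their recent outcomes.
demo_states = ["grasp_pose", "pre_insert_pose", "inserted_pose"]
success_history = {0: [False, False], 1: [True, False, True], 2: [True, True, True]}
start = pick_start_state(demo_states, success_history)   # returns "pre_insert_pose" here
```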

The robot achieved a success rate of over 98% on a number of different tasks in simulation, including reorienting cubes so that a particular color faces up, tightening a nut and bolt, and tidying up tools. In the real-world setup, it achieved a 97% success rate on cube reorientation and lifting, and 64% on inserting a plug into a socket, a task that required a high degree of finger coordination and precision.

Example of a robot arm learning to correctly engage a yellow gear in simulation (left) and in real-world conditions (right).

Example of a robot arm learning to tighten a screw in simulation.

We developed DemoStart with MuJoCo, our open-source physics simulator. After mastering a number of tasks in simulation, and using standard techniques such as domain randomization to reduce the gap between simulation and reality, our approach was able to transfer to the physical world nearly zero-shot.
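For readers unfamiliar with domain randomization, the sketch below shows one basic form of it using the open-source MuJoCo Python bindings: physical parameters such as friction and mass are perturbed before each training run so the learned policy cannot overfit to one exact simulated world. The scene file name and randomization ranges are hypothetical; a real sim-to-real pipeline randomizes many more properties (visual appearance, actuator delays, sensor noise, and so on).

```python
import mujoco
import numpy as np

def make_randomized_model(xml_path: str, rng: np.random.Generator) -> mujoco.MjModel:
    """Load a MuJoCo scene and randomize a few of its physical parameters."""
    model = mujoco.MjModel.from_xml_path(xml_path)
    # Scale each geom's sliding friction coefficient by a random factor.
    model.geom_friction[:, 0] *= rng.uniform(0.8, 1.2, size=model.ngeom)
    # Perturb body masses so the policy cannot rely on one exact dynamics model.
    model.body_mass[:] *= rng.uniform(0.9, 1.1, size=model.nbody)
    return model

rng = np.random.default_rng(seed=0)
# model = make_randomized_model("dex_ee_scene.xml", rng)   # hypothetical scene file
# data = mujoco.MjData(model)                              # then step the physics as usual
```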

Learning robot behavior in simulation can reduce the cost and time needed to run real, physical experiments. But such simulations are difficult to design, and they don’t always translate back into real-world performance. By combining reinforcement learning with learning from a few demonstrations, DemoStart’s progressive learning automatically generates a curriculum that bridges the gap between simulation and reality, making it easier to transfer knowledge from simulation to a physical robot and reducing the cost and time needed for physical experiments.

To enable more advanced robot learning through intensive experimentation, we tested this new approach on a three-fingered robotic hand called DEX-EE, which was developed in collaboration with Shadow Robot.

Image of the dexterous DEX-EE robotic hand, developed by Shadow Robot in collaboration with the Google DeepMind robotics team (Source: Shadow Robot).

The future of robot dexterity

Robotics is a unique area of AI research that shows how well our approaches work in the real world. For example, a large language model could tell you how to tighten a screw or tie your shoes, but even if it were embodied in a robot, it wouldn’t be able to perform those tasks itself.

One day, AI robots will help people with all kinds of tasks at home, in the workplace, and beyond. Dexterity research, including the efficient and general learning approaches we’ve described today, will help make that future possible.

We still have a long way to go before robots can grasp and handle objects with the ease and precision of humans, but we are making significant progress, and each breakthrough innovation is another step in the right direction.
