Tuesday, December 24, 2024

Muscle signals can control a robot


Albert Einstein famously postulated that “the only real valuable thing is intuition,” arguably one of the most crucial keys to understanding intention and communication.

But intuitiveness is difficult to teach – especially to a machine. Looking to improve this, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a method that brings us closer to more seamless human-robot cooperation. The system, called “Conduct-A-Bot,” uses human muscle signals from wearable sensors to control a robot’s movement.

“We envision a world in which machines help people with cognitive and physical work, and to do so they adapt to people, not the other way around,” says Professor Daniela Rus, director of CSAIL, deputy dean of research at the MIT Stephen A. Schwarzman College of Computing, and co-author of a paper about the system.

To enable seamless teamwork between humans and machines, electromyography (EMG) and motion sensors are placed on the biceps, triceps and forearms to measure muscle signals and movement. Algorithms then process the signals to detect gestures in real time, without the need for offline calibration or individual user training data. The system uses only two or three wearable sensors and nothing in the environment, which greatly lowers the barrier for casual users to interact with robots.
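As a rough sketch of how such a real-time pipeline might be structured (this is not the authors’ implementation; the sensor-reading function, thresholds and adaptation rate below are hypothetical stand-ins), each incoming frame of muscle data can be compared against a slowly adapting estimate of its resting level, so no offline, per-user calibration step is needed:

    import numpy as np

    def read_sensor_frame():
        """Hypothetical stand-in for one frame of wearable data: rectified EMG
        envelopes (biceps, triceps, forearm) plus forearm gyro and accelerometer."""
        return {
            "emg": np.abs(np.random.randn(3)),   # rectified EMG channels
            "gyro_z": np.random.randn(),         # forearm rotation rate
            "accel": np.random.randn(3),         # forearm orientation cue
        }

    def detect_activation(envelope, baseline, scale=3.0):
        """Flag a muscle as active when its envelope rises well above a running
        estimate of its resting level (avoids any offline calibration step)."""
        return envelope > scale * baseline

    baseline = np.ones(3)   # running estimate of resting EMG level per channel
    alpha = 0.01            # slow adaptation so brief gestures don't shift the baseline

    for _ in range(200):    # streaming loop, one iteration per sensor frame
        frame = read_sensor_frame()
        emg = frame["emg"]
        active = detect_activation(emg, baseline)
        if not active.any():
            # Only update the resting baseline while the arm looks relaxed.
            baseline = (1 - alpha) * baseline + alpha * emg
        # Downstream logic would map the pattern of active muscles, together with
        # the gyro/accelerometer readings, to one of the gestures described below.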

This work was partially funded by Boeing.

While Conduct-A-Bot could potentially be used for a variety of scenarios, including navigating menus on electronic devices or supervising autonomous robots, the team used a Parrot Bebop 2 drone for this research, although any commercial drone could have been used.

By detecting actions such as rotational gestures, clenched fists, tensed arms and activated forearms, Conduct-A-Bot can move the drone left, right, up, down and forward, as well as allow it to rotate and stop.

If you pointed to the right, a friend would probably interpret that as a cue to move in that direction. Similarly, if you wave your hand to the left, the drone follows suit and turns left.

In tests, the drone responded correctly to 82 percent of more than 1,500 human gestures when it was remotely controlled to fly through hoops. The system also correctly identified approximately 94 percent of signaled gestures when the drone was not being flown.

“Understanding our gestures can help robots interpret more of the nonverbal signals we naturally use in everyday life,” says Joseph DelPreto, lead author of the new paper. “This type of system could help make interacting with a robot more like interacting with another person and make it easier for people without prior experience or external sensors to start using robots.”

This type of system could eventually be adapted to a range of human-robot applications, including remote exploration, assistive personal robots, or manufacturing tasks such as delivering objects or lifting materials.

These smart tools are also compatible with social distancing and have the potential to open up a future of contactless work. For example, you can imagine human-controlled machines safely cleaning a hospital room or administering medications while allowing us, humans, to keep a safe distance.

Muscle signals can often provide information about conditions that are hard to observe visually, such as joint stiffness or fatigue.

For example, if you watch a video of a person lifting a large box, you might have difficulty guessing how much effort or force was needed – and a machine would also have difficulty judging this from sight alone. Using muscle sensors opens up the possibility of estimating not only motion, but also the force and torque required to execute that physical trajectory.

For the gesture dictionary currently used to control the robot, movements were detected as follows (a rough mapping to drone commands is sketched after the list):

  • stiffening the upper arm to stop the robot (similar to briefly flinching when you see something going wrong): biceps and triceps muscle signals;

  • waving the hand left/right and up/down to move the robot sideways or vertically: forearm muscle signals (with the forearm accelerometer indicating hand orientation);

  • clenching a fist to move the robot forward: forearm muscle signals; and

  • rotating the hand clockwise/counterclockwise to turn the robot: forearm gyroscope signals.
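As a loose illustration, the resulting gesture labels could then be translated into high-level drone commands with a simple lookup. The gesture and command names below are illustrative placeholders, not the paper’s actual interface or the Parrot Bebop 2 API:

    from enum import Enum, auto

    class Gesture(Enum):
        STIFFEN_ARM = auto()   # biceps/triceps co-contraction
        WAVE_LEFT = auto()
        WAVE_RIGHT = auto()
        WAVE_UP = auto()
        WAVE_DOWN = auto()
        CLENCH_FIST = auto()   # forearm activation
        ROTATE_CW = auto()     # forearm gyroscope
        ROTATE_CCW = auto()

    # Illustrative mapping from detected gestures to high-level drone commands.
    GESTURE_TO_COMMAND = {
        Gesture.STIFFEN_ARM: "stop",
        Gesture.WAVE_LEFT: "move_left",
        Gesture.WAVE_RIGHT: "move_right",
        Gesture.WAVE_UP: "move_up",
        Gesture.WAVE_DOWN: "move_down",
        Gesture.CLENCH_FIST: "move_forward",
        Gesture.ROTATE_CW: "yaw_clockwise",
        Gesture.ROTATE_CCW: "yaw_counterclockwise",
    }

    def command_for(gesture: Gesture) -> str:
        """Translate a detected gesture into a drone command string."""
        return GESTURE_TO_COMMAND[gesture]

    print(command_for(Gesture.CLENCH_FIST))  # -> "move_forward"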

Machine learning classifiers detected the gestures using the wearable sensors. Unsupervised classifiers processed the muscle and motion data and clustered it in real time, learning to separate gestures from other movements. A neural network also predicted wrist flexion or extension from forearm muscle signals.
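As a toy illustration of the clustering idea (a simple two-cluster online update standing in for the paper’s actual classifiers), features computed from each window of muscle data could be assigned to a “rest” or “gesture” cluster whose centers adapt as data streams in:

    import numpy as np

    def window_features(emg_window):
        """Simple per-window features: mean absolute value and variance of a
        rectified EMG window (a common, if simplistic, choice)."""
        rect = np.abs(emg_window)
        return np.array([rect.mean(), rect.var()])

    # Two cluster centers, nudged toward each sample they win (online k-means).
    centers = np.array([[0.1, 0.01],   # assumed 'rest' cluster
                        [1.0, 0.50]])  # assumed 'gesture' cluster
    lr = 0.05

    def update_and_label(features):
        """Assign the window to the nearest center, then move that center
        slightly toward the new sample so the model adapts to the wearer."""
        dists = np.linalg.norm(centers - features, axis=1)
        k = int(dists.argmin())
        centers[k] += lr * (features - centers[k])
        return "gesture" if k == 1 else "rest"

    # Simulated stream: quiet windows followed by a burst of muscle activity.
    for scale in [0.1, 0.1, 0.1, 1.0, 1.2, 0.1]:
        window = scale * np.random.randn(64)
        print(update_and_label(window_features(window)))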

The system essentially calibrates itself to each person’s signals as they make gestures to control the robot, making it faster and easier for casual users to start interacting with robots.

In the future, the team hopes to expand testing to more subjects. And while the movements in Conduct-A-Bot include gestures typical of robot movement, the researchers want to expand the vocabulary to include more continuous or user-defined gestures. Ultimately, they hope that robots will learn from these interactions to better understand tasks and provide more predictable assistance or enhance their autonomy.

“This system brings us one step closer to enabling us to work seamlessly with robots, so they can become more effective and intelligent tools for everyday tasks,” says DelPreto. “As such collaborations become more accessible and ubiquitous, the opportunities for synergistic benefits continue to deepen.”

DelPreto and Rus presented the paper virtually earlier this month at the ACM/IEEE International Conference on Human-Robot Interaction.
