Tuesday, December 24, 2024

Brain-controlled robots

For robots to do what we want them to do, they need to understand us. Too often, this means meeting them halfway: for example, teaching them the intricacies of human language or giving them explicit instructions for very specific tasks.

But what if robots could be a more natural extension of us, able to do whatever we’re thinking?

A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University is working on this problem by creating a feedback system that allows humans to instantly correct robot errors using only their brains.

A feedback system developed at MIT allows operators to correct robot selections in real time using only brain signals.

Using data from an electroencephalography (EEG) monitor that records brain activity, the system can detect whether a person notices an error as the robot performs an object-sorting task. The team’s novel machine-learning algorithms enable the system to classify brain waves within 10 to 30 milliseconds.
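
To make that idea concrete, a minimal detector of this kind might buffer a short post-stimulus EEG window and score it with a pre-trained linear classifier. The sketch below is only an illustration: the sampling rate, channel count, window length, feature choice, and model are assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical parameters -- stand-ins, not values from the study.
SAMPLE_RATE_HZ = 256      # EEG sampling rate
WINDOW_MS = 500           # length of the post-stimulus window to classify
N_CHANNELS = 48           # number of electrodes

def extract_features(window):
    """Flatten a (channels x samples) EEG window into one feature vector.
    Real pipelines typically band-pass filter and downsample first."""
    return window.reshape(-1)

def errp_detected(window, weights, bias, threshold=0.0):
    """Score a window with a pre-trained linear classifier; True means the
    operator likely perceived the robot's choice as an error."""
    score = float(np.dot(weights, extract_features(window)) + bias)
    return score > threshold

# Example on simulated data (stand-ins for real EEG and trained weights).
n_samples = int(SAMPLE_RATE_HZ * WINDOW_MS / 1000)
window = np.random.randn(N_CHANNELS, n_samples)
weights = np.zeros(N_CHANNELS * n_samples)
print(errp_detected(window, weights, bias=0.0))
```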

While the system currently handles relatively simple binary-choice tasks, the paper’s senior author says the work suggests that we may one day be able to control robots in a much more intuitive way.

“Imagine that you can immediately command a robot to perform a specific action, without having to type a command, press a button or even say a word,” says CSAIL director Daniela Rus. “Such a streamlined approach would improve our ability to oversee factory robots, autonomous cars, and other technologies we haven’t even invented yet.”

In the current study, the team used a humanoid robot named “Baxter” from Rethink Robotics, a company led by former CSAIL director and iRobot co-founder Rodney Brooks.

A paper presenting the work was written by BU graduate student Andres F. Salazar-Gomez, CSAIL graduate student Joseph DelPreto, and CSAIL scientist Stephanie Gil, under the supervision of Rus and BU professor Frank H. Guenther. The paper was recently accepted to the IEEE International Conference on Robotics and Automation (ICRA), which will be held in Singapore in May.

Intuitive human-robot interaction

Previous work on EEG-guided robotics involved training humans to “think” in specific ways that computers could recognize. For example, an operator may need to look at one of two glowing displays, each corresponding to a different task for the robot to perform.

The disadvantage of this method is that the training process and thought modulation can be taxing, especially for people supervising tasks in navigation or construction that require high concentration.

Rus’s team wanted to make the experience more natural. To do this, they focused on brain signals called “error-related potentials” (ErrPs), which are generated when our brain notices an error. When the robot indicates which choice it plans to make, the system uses ErrPs to determine whether the human agrees with the decision.
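
In outline, that binary-choice feedback loop might look like the sketch below: the robot indicates the target it intends to pick, a fresh EEG window is checked for an ErrP, and a detected error flips the robot to the other target. The `robot` and `eeg` interfaces here are hypothetical placeholders, not the study’s code.

```python
def choose_target(robot, eeg, errp_detected, targets=("left", "right")):
    """Let a detected ErrP veto the robot's binary choice (illustrative sketch)."""
    choice = robot.propose(targets)            # robot signals which bin it intends to use
    window = eeg.read_post_stimulus_window()   # EEG recorded just after the robot's cue
    if errp_detected(window):                  # the operator's brain flagged a mistake
        choice = targets[1] if choice == targets[0] else targets[0]
    robot.execute(choice)
    return choice
```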

“When you look at a robot, all you have to do is mentally agree or disagree with what it is doing,” Rus says. “You don’t have to learn to think a certain way – the machine adapts to you, not the other way around.”

ErrP signals are extremely faint, which means the system must be finely tuned enough to both classify the signal and incorporate it into the feedback loop for the operator. In addition to monitoring initial ErrPs, the team also sought to detect “secondary errors,” which occur when the system fails to notice the human’s initial correction.

“If the robot is unsure of its decision, it can trigger a human response to get a more accurate answer,” says Gil. “These signals can dramatically improve accuracy by creating a continuous dialogue between human and robot in communicating their choices.”
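
One way to picture that dialogue: if the operator’s brain also objects to the correction (a secondary ErrP), the system could pause and ask for explicit confirmation instead of acting. This is a hedged sketch with made-up interfaces, extending the binary-choice loop above; it is not the published method.

```python
def choose_with_secondary_check(robot, eeg, errp_detected, targets=("left", "right")):
    """Binary choice with a secondary-error check (illustrative sketch only)."""
    choice = robot.propose(targets)
    if errp_detected(eeg.read_post_stimulus_window()):      # primary ErrP: veto the proposal
        corrected = targets[1] if choice == targets[0] else targets[0]
        robot.announce(corrected)                            # show the operator the correction
        if errp_detected(eeg.read_post_stimulus_window()):   # secondary ErrP: correction rejected too
            return robot.ask_for_explicit_confirmation(targets)
        choice = corrected
    robot.execute(choice)
    return choice
```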

While the system is not yet able to recognize secondary errors in real time, Gil expects the model to improve to over 90% accuracy once it can do so.

Additionally, since ErrP signals have been shown to be proportional to how egregious the robot’s error is, the team believes future systems could handle more complex multiple-choice tasks.

“This work brings us closer to developing effective tools for brain-controlled robots and prosthetics,” says Wolfram Burgard, a professor of computer science at the University of Freiburg, who was not involved in the research. “Given how difficult it can be to translate human language into a meaningful signal for robots, work in this area could have a truly profound impact on the future of human-robot collaboration.”

The project was funded in part by Boeing and the National Science Foundation.
