Monday, December 23, 2024

AI assistant monitors teamwork to promote effective collaboration

During a 2018 research cruise around Hawaii, Yuening Zhang SM ’19, PhD ’24 learned how hard it is to keep a ship on track. The careful coordination required to map the underwater terrain could sometimes create a stressful environment for team members, who might have differing understandings of which tasks to perform in spontaneously changing conditions. During those trips, Zhang wondered how a robotic companion could help her and her crewmates achieve their goals more effectively.

Six years later, as a research assistant at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Zhang developed what could be considered the missing piece: an AI assistant that communicates with team members to align roles and achieve a common goal. In a paper presented at the International Conference on Robotics and Automation (ICRA) and published on IEEE Xplore on August 8, she and her colleagues demonstrate a system that can supervise a team of human and artificial intelligence agents, intervening when necessary to potentially boost teamwork efficiency in areas such as search-and-rescue missions, medical procedures, and strategy games.

The CSAIL-led group has developed a theory-of-mind model for AI agents that represents how humans think about and understand each other’s possible plans when collaborating on a task. By observing the actions of other agents, this novel team coordinator can infer their plans and their understanding of each other based on a prior set of beliefs. When their plans are incompatible, the AI helper intervenes, adjusting their beliefs about one another, instructing their actions, and asking questions when necessary.
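To make the idea concrete, here is a minimal sketch, in Python, of how such a coordinator might work: it filters a prior set of candidate plans against each agent’s observed actions and flags agents whose most likely plans collide. The plan library, agent names, and intervention logic are illustrative assumptions, not the paper’s implementation.

```python
# Illustrative sketch (not the CSAIL system): infer each agent's plan
# from observed actions by Bayesian filtering over candidate plans,
# then flag pairs of agents whose inferred plans duplicate effort.
from itertools import combinations

# Hypothetical plan library: plan name -> action sequence it entails.
CANDIDATE_PLANS = {
    "search_room_A": ["goto_A", "scan_A", "report"],
    "search_room_B": ["goto_B", "scan_B", "report"],
}

def update_plan_belief(belief, observed_action, step):
    """Bayes-style update: zero out plans inconsistent with the
    observed action at this step, then renormalize."""
    posterior = {}
    for plan, prob in belief.items():
        actions = CANDIDATE_PLANS[plan]
        consistent = step < len(actions) and actions[step] == observed_action
        posterior[plan] = prob * (1.0 if consistent else 0.0)
    total = sum(posterior.values())
    if total == 0:       # observation fits no candidate plan:
        return belief    # keep the prior; a real system might ask a question
    return {p: v / total for p, v in posterior.items()}

def most_likely(belief):
    return max(belief, key=belief.get)

def detect_conflicts(beliefs):
    """Flag agent pairs whose most likely plans duplicate effort."""
    return [(a1, a2, most_likely(b1))
            for (a1, b1), (a2, b2) in combinations(beliefs.items(), 2)
            if most_likely(b1) == most_likely(b2)]

# Uniform prior over candidate plans for two agents.
beliefs = {agent: {p: 1 / len(CANDIDATE_PLANS) for p in CANDIDATE_PLANS}
           for agent in ("rescuer_1", "rescuer_2")}

# Both agents are observed heading to room A on their first action.
for agent in beliefs:
    beliefs[agent] = update_plan_belief(beliefs[agent], "goto_A", 0)

for a1, a2, plan in detect_conflicts(beliefs):
    print(f"Intervene: {a1} and {a2} both appear to be executing '{plan}'")
```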

For example, when a team of rescuers is in the field to make an initial triage of victims, they must make decisions based on their beliefs about the roles and progress of others. This type of epistemic planning can be improved with CSAIL software, which can send messages about what each agent is going to do or has done to ensure the task is completed and avoid duplication of effort. In such a case, an AI helper can intervene to communicate that an agent has already gone to a certain room or that no agents are covering a certain area with potential victims.
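The coverage-messaging idea described above can be sketched in a few lines; the rooms, data structures, and message wording here are hypothetical stand-ins, not the CSAIL software’s actual interface.

```python
# Illustrative sketch of the triage-coverage messages described above:
# the coordinator tracks which rooms each rescuer has searched or is
# heading to, then warns about duplicated effort and uncovered rooms.

def coordinate_coverage(rooms, visited, intentions):
    """rooms: all areas with potential victims.
    visited: {agent: set of rooms already searched}.
    intentions: {agent: room the agent is heading to next}."""
    covered = set().union(*visited.values())
    messages = []
    # Warn agents heading to rooms that were already searched.
    for agent, target in intentions.items():
        if target in covered:
            messages.append(f"{agent}: room {target} was already searched; pick another.")
    # Warn everyone about rooms no agent has searched or claimed.
    claimed = covered | set(intentions.values())
    for room in sorted(rooms - claimed):
        messages.append(f"All agents: no one is covering room {room}.")
    return messages

rooms = {"A", "B", "C", "D"}
visited = {"rescuer_1": {"A"}, "rescuer_2": {"B"}}
intentions = {"rescuer_1": "B", "rescuer_2": "C"}  # rescuer_1 duplicates effort

for msg in coordinate_coverage(rooms, visited, intentions):
    print(msg)
```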

“Our work takes into account the mindset of, ‘I believe you believe what someone else believes,’” says Zhang, who is now a research scientist at Mobi Systems. “Imagine you’re working on a team and you ask yourself, ‘What exactly is this person doing? What am I going to do? Does this person know what I’m going to do?’ We model how different team members understand the overall plan and communicate what they need to accomplish to help realize the team’s overall goal.”

Artificial Intelligence to the Rescue

Even with a sophisticated plan, both human and robotic agents will run into confusion, and even make mistakes, if their roles are unclear. The challenge is especially acute in search-and-rescue missions, where the goal may be to locate a person in distress despite constrained time and a huge area to scan. Fortunately, communications technology augmented by the novel robot assistant could notify search parties of what each team is doing and where it is searching. In turn, agents could cover their territory more efficiently.

This type of task organization can aid in other high-stakes scenarios, such as surgeries. In these cases, a nurse must first wheel a patient into the operating room, and then an anesthesiologist must put the patient to sleep before the surgeons can begin the operation. During the operation, the team must constantly monitor the patient’s condition, dynamically responding to the actions of each collaborator. To ensure that each step in the procedure remains well-organized, an AI team coordinator can oversee and intervene if there is confusion about any of these tasks.

Effective teamwork is also integral to video games like “Valorant,” where players coordinate online over who attacks and who defends against the opposing team. In such scenarios, an AI assistant could appear on screen to warn individual users when they have misread the tasks they need to complete.

Before she led the development of this model, Zhang designed EPike, a computational model that can act as a team member. In a 3D simulation program, the algorithm controlled a robotic agent that had to match a container to the beverage a human had chosen. Rational and sophisticated as these AI-simulated bots may be, they are at times hampered by misconceptions about their human partners or the task at hand. The novel AI coordinator consistently intervened in such cases, correcting the agents’ beliefs when necessary to head off potential problems: it sent messages to the robot about the human’s true intentions so that the robot matched the container correctly.
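A toy rendering of that beverage-matching scenario might look like the following, under the assumption (ours, not the paper’s) that the robot holds a possibly mistaken belief about the human’s chosen drink and the coordinator overwrites it when that belief would cause an error.

```python
# Toy sketch of the beverage-matching scenario: all class names,
# mappings, and the correction rule are illustrative assumptions.

CONTAINERS = {"coffee": "mug", "juice": "glass", "tea": "teacup"}

class RobotAgent:
    def __init__(self):
        # The robot's (possibly mistaken) belief about the human's choice.
        self.believed_drink = "coffee"

    def pick_container(self):
        return CONTAINERS[self.believed_drink]

class Coordinator:
    def observe_and_correct(self, robot, true_drink):
        """Intervene only when the robot's belief would cause an error."""
        if robot.believed_drink != true_drink:
            print(f"Coordinator -> robot: human actually chose {true_drink}.")
            robot.believed_drink = true_drink

robot = RobotAgent()
coordinator = Coordinator()
coordinator.observe_and_correct(robot, true_drink="juice")
print(f"Robot fetches the {robot.pick_container()}.")
```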

“In our work on human-robot collaboration, we have been both humbled and inspired over the years by how fluid human partners can be,” says Brian C. Williams, a professor of aeronautics and astronautics at MIT, a member of CSAIL, and senior author of the study. “Just look at a young couple with children who work together to make breakfast and take the kids to school. If one parent sees their partner serving breakfast while still in a bathrobe, the parent knows to quickly shower and get the kids to school without saying a word. Good partners are well aligned with their beliefs and goals, and our work on epistemic planning aims to capture this style of reasoning.”

The researchers’ method involves probabilistic reasoning with recursive mental modeling of agents, which allows the AI assistant to make risk-constrained decisions. They also focused on modeling the agents’ understanding of plans and actions, which could complement previous work on modeling beliefs about the current world or environment. The AI assistant currently infers the agents’ beliefs from a given prior set of possible beliefs, but the MIT group envisions using machine learning techniques to generate novel hypotheses on the fly. To apply this approach to real-life tasks, they also aim to consider richer representations of plans in their work and to further reduce the computational cost.
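The nested “I believe you believe” structure and the risk-constrained decision can be suggested with a compact sketch; the probabilities, threshold, and independence assumption below are purely illustrative, and the paper’s actual representation and inference are more sophisticated.

```python
# Compact sketch of nested beliefs: the coordinator stores "what agent A
# plans" and "what A believes agent B plans" as distributions over plans.
# All numbers, names, and the independence assumption are hypothetical.

nested_belief = {
    "A_plan":          {"search_room_A": 0.8, "search_room_B": 0.2},
    "A_thinks_B_plan": {"search_room_A": 0.6, "search_room_B": 0.4},
}

def duplication_risk(belief):
    """Probability (under independence) that A heads where A also
    believes B is heading, i.e., that effort is duplicated."""
    return sum(belief["A_plan"][plan] * belief["A_thinks_B_plan"][plan]
               for plan in belief["A_plan"])

RISK_BOUND = 0.4  # hypothetical bound for a risk-constrained decision

risk = duplication_risk(nested_belief)
if risk > RISK_BOUND:
    print(f"Risk {risk:.2f} exceeds bound {RISK_BOUND}; coordinator intervenes.")
```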

Lively Object Language Labs President Paul Robertson, Johns Hopkins University assistant professor Tianmin Shu, and former CSAIL collaborator Sungkweon Hong PhD ’23 join Zhang and Williams in the study. Their work was supported in part by the U.S. Defense Advanced Research Projects Agency (DARPA) Artificial Social Intelligence for Successful Teams (ASIST) program.
