Tuesday, March 10, 2026

Claude from Anthropic takes control of the robot dog

As more robots start to appear in warehouses, offices, and even homes, the idea of large language models hacking into sophisticated systems sounds like a sci-fi nightmare. So naturally, Anthropic researchers wanted to see what would happen if Claude tried to take control of a robot – in this case, a robot dog.

In a new study, Anthropic researchers found that Claude was able to automate much of the work involved in programming the robot and getting it to perform physical tasks. On the one hand, their findings demonstrate the ability of state-of-the-art artificial intelligence models to perform agentic coding. On the other hand, they show how these systems can begin to expand into the physical realm as models master more aspects of coding and become better at interacting with software – as well as physical objects.

“We suspect that the next step for AI models will be to go out into the world and have a broader impact on the world,” Logan Graham, a member of Anthropic’s red team, which examines the models for potential risks, tells WIRED. “It’s really going to require more interaction between models and robots.”

Courtesy of Anthropic

Anthropic was founded in 2021 by former OpenAI employees who believed that artificial intelligence could become problematic, even hazardous, as it developed. Graham says today’s models aren’t clever enough to take full control of a robot, but future models could be. He says studying how people use LLMs to program robots could help the industry prepare for the idea of “models that ultimately embody themselves,” referring to the notion that artificial intelligence could one day operate physical systems.

It’s still unclear why an AI model would decide to take control of a robot, let alone do something bad to it. But worst-case scenario speculation is part of Anthropic’s brand and helps position the company as a key player in the responsible AI movement.

In an experiment called Project Fetch, Anthropic asked two groups of researchers with no prior robotics experience to take control of a four-legged Unitree Go2 robot dog and program it to perform specific actions. Each team was given access to a controller and then asked to complete increasingly sophisticated tasks. One group used Claude’s coding model, and the other wrote code without the help of artificial intelligence. The group using Claude was able to complete some – but not all – tasks faster than the all-human programming group. For example, it managed to get the robot to walk around and find a beach ball, something the all-human group couldn’t figure out.

Anthropic also examined the collaborative dynamics of both teams by recording and analyzing their interactions. The researchers found that the group without access to Claude expressed more negative sentiment and confusion. This may be because Claude established a connection to the robot faster and coded an easier-to-use interface.

The Go2 robot used in Anthropic’s experiments costs $16,900 – relatively inexpensive by robot standards. It is typically used in industries such as construction and manufacturing to conduct remote inspections and security patrols. The robot can walk autonomously, but generally relies on commands from high-level software or a person operating the controller. The Go2 is manufactured by Unitree, based in Hangzhou, China. According to a recent SemiAnalysis report, its systems are currently the most popular on the market.

Large language models, the systems that power ChatGPT and other chatbots, typically generate text or images in response to a prompt. More recently, these systems have become adept at generating code and operating software, transforming them into agents rather than just text generators.
