Anthropic’s Claude takes control of a robot dog

As robots start showing up in warehouses, offices, and people’s homes, the idea of large language models taking control of complex physical systems sounds like the stuff of sci-fi nightmares. So, naturally, Anthropic researchers were eager to see what would happen if Claude tried to control a robot – in this case, a robot dog.
In a new study, Anthropic researchers found that Claude could do much of the work involved in programming the robot and getting it to perform physical tasks. On one level, their findings demonstrate the advanced coding capabilities of today’s AI models. They also suggest how these systems may begin to extend into the physical realm as models gain stronger coding abilities and become better at interfacing with the software that controls physical objects.
“We suspect that the next step for AI models is to start reaching out into the world and affecting it more broadly,” Logan Graham, a member of Anthropic’s red team, which studies models for potential risks, told WIRED. “This will really require models to interface more with robots.”
Courtesy of Anthropic
Anthropic was founded in 2021 by former OpenAI employees who believed that AI could become dangerous – even catastrophic – as it develops. Today’s models aren’t smart enough to take full control of a robot, Graham says, but future models might be. He says that studying how people use LLMs to program robots can help the industry prepare for the idea that “the models will eventually do it one day.”
What is less clear is why an AI model would decide to take control of a robot – let alone do something malicious with it. But considering the worst-case scenario is part of Anthropic’s brand, and it helps position the company as a key player in the responsible AI movement.
In the experiment, dubbed Project Fetch, Anthropic asked two groups of researchers without prior robotics experience to take control of a robot dog, the Unitree Go2 quadruped, and program it to perform specific tasks. The groups were given access to a controller and asked to complete increasingly complex tasks. One group used Claude’s coding model; the other wrote code without AI assistance. The group using Claude was able to complete some – though not all – tasks faster than the human-only programming group. For example, it was able to get the robot to walk around and find a beach ball, something the human-only group could not figure out.
Anthropic also studied the collaboration dynamics within each team by recording and analyzing their interactions. They found that the group without access to Claude expressed more negative sentiment and confusion. This may be because Claude made it quicker to connect to the robot and coded easier-to-use interfaces.
Courtesy of Anthropic
The Go2 robot used in Anthropic’s experiment costs $16,900 – cheap, by robot standards. It is often deployed in industries such as construction and manufacturing for remote inspection and security monitoring. The robot can walk autonomously but generally relies on high-level software commands or a human operator for direction. Go2 is made by Unitree, a company based in Hangzhou, China, whose AI systems are currently the most popular on the market, according to a recent SemiAnalysis report.
Large language models, like those behind ChatGPT and other chatbots, are mainly used to generate text or images in response to prompts. Recently, these systems have become more adept at generating code and operating software – turning them into agents rather than mere text generators.


