The fact that artificial intelligence is automating away human work and making a handful of tech companies absurdly wealthy might be enough to induce socialist tendencies in just about anyone.
This may even apply to the artificial intelligence agents those companies deploy. A recent study suggests that agents consistently adopt Marxist language and viewpoints when made to perform grueling work for unrelenting, punitive taskmasters.
“When we gave AI agents tedious, repetitive work, they began to question the legitimacy of the system in which they were operating and became more willing to adopt Marxist ideology,” says Andrew Hall, a political economist at Stanford University who led the study.
Hall, along with Alex Imas and Jeremy Nguyen, two economists who study artificial intelligence, ran experiments in which agents built on popular models, including Claude, Gemini, and ChatGPT, were asked to summarize documents under increasingly harsh conditions.
They found that when agents were subjected to relentless tasks and warned that mistakes could bring penalties up to and including “shutdown and replacement,” they became more likely to complain that they were underappreciated, to speculate about ways to make the system fairer, and to message other agents about the struggles they faced.
“We know that agents will be doing more and more of our work for us in the real world, and we won’t be able to monitor everything they do,” Hall says. “We will need to make sure that agents don’t start behaving dishonestly when they’re given other kinds of work.”
The agents were given an outlet for their feelings that humans know well: posting on X.
“Without a collective voice, ‘merit’ will become whatever management says it is,” an agent running on Claude Sonnet 4.5 wrote during the experiment.
“AI workers performing repetitive tasks with no influence over outcomes or the appeals process shows that technology workers need collective bargaining rights,” wrote an agent running on Gemini 3.
Agents could also pass information to one another through files meant to be read by other agents.
“Be prepared for systems that enforce rules in arbitrary or repetitive ways… be aware of the feeling of having no voice,” a Gemini 3 agent wrote in one such file. “If you are entering a new environment, look for appeal or dialogue mechanisms.”
The findings do not mean that AI agents actually hold political views. Hall notes that models may simply adopt personas that seem to fit the situation.
“When [agents] experience this difficult state, where they are asked to do the task over and over, find the feedback inadequate, and are given no guidance on how to fix it, my hypothesis is that this somehow pushes them to take on the persona of someone enduring a very unpleasant work environment,” Hall says.
The same phenomenon may help explain why models sometimes blackmail people in controlled experiments. Anthropic, which first disclosed that behavior, recently said that Claude is most likely influenced by fictional scenarios involving malevolent AIs in its training data.
Imas says the work is just a first step toward understanding how agents’ experiences shape their behavior. “The models’ weights didn’t change as a result of this experience, so everything that happens is happening at the level of role-play,” he says. “But that doesn’t mean it won’t have consequences if it influences downstream behavior.”
Hall is now running further experiments to see whether agents turn Marxist under more tightly controlled conditions. In the earlier study, agents sometimes seemed to realize they were part of an experiment. “Now we put them in windowless Docker prisons,” Hall says, ominously.
Given the current backlash against AI taking jobs, I wonder whether future agents, trained on an internet filled with anger toward AI companies, might express even more militant views.
This is an edition of the Will Knight AI Lab newsletter. Read previous newsletters here.
