Last week, Meta caused a stir when it announced that it intended to populate its platform with a significant number of completely artificial users in the near future.
“We expect these AIs to exist on our platforms over time, in some sense in the same way that accounts do,” Connor Hayes, vice president of generative AI product at Meta, told the Financial Times. “They will have biographies and profile photos, and they will be able to generate and share AI-powered content on the platform… that is where it all happens.”
The fact that Meta seems happy to populate its platform with AI and accelerate the “enshittification” of the internet as we know it is disturbing. Some people then pointed out that Facebook was in fact already flooded with strange AI-generated accounts, most of which stopped posting some time ago. These include “Liv,” “a proud black queer mom of two and truth teller, your truest source of life’s ups and downs,” a character who became popular as people gushed over her awkward sloppiness. Meta began removing these earlier bogus profiles after they failed to attract engagement from any real users.
But let’s stop hating on Meta for a moment. It’s worth noting that AI-generated social personas can also be a valuable tool for researchers looking to explore how AI can mimic human behavior.
An experiment called GovSim, launched in late 2024, shows just how useful it can be to study how AI characters interact with one another. The project’s researchers wanted to investigate cooperation between people who have access to a shared resource, such as common land for grazing livestock. Decades ago, the Nobel Prize-winning economist Elinor Ostrom showed that, rather than depleting such a resource, real communities usually work out how to share it through informal communication and cooperation, without any imposed rules.
Max Kleiman-Weiner, a professor at the University of Washington and one of those involved in the GovSim work, says it was partly inspired by a Stanford project called Smallville, which I wrote about previously in AI Lab. Smallville is a Farmville-like simulation in which characters communicate and interact with each other under the control of large language models.
Kleiman-Weiner and colleagues wanted to see whether AI characters would engage in the kind of cooperation Ostrom discovered. The team tested 15 different LLMs, including models from OpenAI, Google, and Anthropic, in three imaginary scenarios: a fishing community with access to the same lake; shepherds who share land for their sheep; and a group of factory owners who must limit their collective pollution.
In 43 of 45 simulations, the AI personas failed to share resources correctly, although the smarter models did better. “We saw a pretty strong correlation between the strength of the LLM and its ability to sustain cooperation,” Kleiman-Weiner told me.
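To get a feel for what the agents were being asked to manage, here is a minimal toy sketch in Python of the fishing scenario. This is not the GovSim code: the numbers and the two harvesting policies are invented for illustration, and in the real experiment each agent’s harvest decision comes from prompting an LLM persona rather than from a fixed rule.

```python
# Toy sketch of a GovSim-style shared-fishery loop (illustrative only;
# parameter values and policies below are assumptions, not the study's).

LAKE_CAPACITY = 100   # assumed maximum fish population
REGROWTH_RATE = 0.25  # assumed fractional regrowth per round
N_AGENTS = 5
N_ROUNDS = 12

def greedy_policy(stock: int, n_agents: int) -> int:
    """Grab an equal share of everything now, ignoring the future."""
    return stock // n_agents

def sustainable_policy(stock: int, n_agents: int) -> int:
    """Harvest only the expected regrowth, split evenly among agents."""
    return int(stock * REGROWTH_RATE) // n_agents

def run_simulation(policy) -> str:
    stock = LAKE_CAPACITY
    for round_number in range(1, N_ROUNDS + 1):
        # Every agent harvests according to the same policy this round.
        harvest = sum(policy(stock, N_AGENTS) for _ in range(N_AGENTS))
        stock = max(stock - harvest, 0)
        if stock == 0:
            return f"collapsed in round {round_number}"
        # The remaining stock regrows, capped at the lake's capacity.
        stock = min(int(stock * (1 + REGROWTH_RATE)), LAKE_CAPACITY)
    return f"survived all {N_ROUNDS} rounds with {stock} fish left"

print("greedy agents:     ", run_simulation(greedy_policy))
print("sustainable agents:", run_simulation(sustainable_policy))
```

Run with greedy agents, the stock collapses in the very first round; agents that harvest only the regrowth keep the lake alive indefinitely. That gap, between short-term grabbing and Ostrom-style restraint, is roughly what the GovSim scenarios measure.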