Wednesday, March 11, 2026

AI models also suffer from brain rot

It turns out AI models can be a lot like us.

New research from the University of Texas at Austin, Texas A&M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of “brain rot” that may be familiar to anyone who has spent too much time doomscrolling on X or TikTok.

“We live in an era where information grows faster than attention spans, and much of it is engineered to capture clicks rather than convey truth or depth,” says Junyuan Hong, an incoming assistant professor at the National University of Singapore who worked on the study as a graduate student at UT Austin. “We wondered: What happens when AIs are trained on the same stuff?”

Hong and his colleagues fed different kinds of text to two open-source large language models during pretraining. They examined what happened when the models were given a mix of highly “engaging,” widely shared social media posts and posts containing sensational or hyped language such as “wow,” “look,” or “today only.”

The researchers then used several different benchmarks to gauge the impact of this “junk” social media diet on two open-source models: Meta’s Llama and Alibaba’s Qwen.

Models fed junk text experienced a kind of AI brain rot, with cognitive decline including reduced reasoning ability and degraded memory. By two measures, the models also became less ethically aligned and more psychopathic.

The results mirror research on humans showing that low-quality online content harms people’s cognitive abilities. The phenomenon is so widespread that Oxford University Press named “brain rot” its word of the year in 2024.

The results matter for the AI industry, Hong says, because model builders might assume that social media posts are a good source of training data for their models. “Training on viral or attention-grabbing content may look like scaling up data,” he says. “But it can quietly corrode reasoning, ethics, and long-context attention.”

The fact that LLMs can suffer from brain rot seems especially worrying given that AI itself now generates a growing share of social media content, much of it apparently optimized for engagement. The researchers also found that models damaged by low-quality content could not easily be repaired through retraining.

The findings also suggest that AI systems built around social media platforms, such as Grok, could suffer from quality-control problems if user-generated posts are used in training without regard for their quality.

“As more AI-generated slop spreads across social media, it contaminates the very data future models will learn from,” Hong says. “Our findings show that once this kind of brain rot sets in, later clean training can’t fully undo it.”


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.
