It’s obvious to anyone with a pulse and a smartphone that the internet has an artificial intelligence problem. The problem has grown more acute since the launch of ChatGPT in 2022, with some social media platforms flooded with AI-generated text. Now there is data to back up the anecdotal evidence.
In a new preprint study published today, researchers from Imperial College London, Stanford University, and the Internet Archive found that about 35 percent of all new websites are generated or assisted by artificial intelligence. The same study found that online writing is becoming “increasingly sanitized and artificially comforting.” In other words, AI is making the internet feel fake.
The research team tried four different approaches to AI detection before settling on tools from Pangram Labs, which delivered the most consistent results. (While the team said the tool performed well in testing, it’s worth noting that all AI detection tools are imperfect.) To compile a representative sample of websites, the team used the Internet Archive’s Wayback Machine, which collects snapshots of websites. In addition to quantifying how many websites created between 2022 and 2025 rely on AI-generated writing, the study also tested six theories about the characteristics of AI slop.
The Artificial Happiness test looked at how AI affects the tone of writing online. Using sentiment analysis, which classifies words as positive, neutral, or negative, the researchers found that “the average positive sentiment score for AI-generated or AI-assisted websites was 107 percent higher than for non-AI websites.” They see this increase in artificial happiness as a “symptom” of the “sycophantic and overly optimistic nature of existing LLMs.” In this way, the tendency of AI writing tools to suck up to human users has the side effect of making the overall tone of online writing more saccharine.
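The study doesn’t detail its sentiment pipeline, but the word-classification approach it describes can be illustrated with a minimal lexicon-based scorer. The word lists here are tiny toy examples, not the study’s actual lexicon:

```python
# Toy lexicon-based sentiment scorer (illustrative only; the study's
# actual lexicon and method are not specified in this article).
POSITIVE = {"great", "wonderful", "amazing", "delightful", "happy", "love"}
NEGATIVE = {"terrible", "awful", "sad", "hate", "broken", "worst"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: share of positive minus negative words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("What a wonderful, delightful launch!"))    # 1.0
print(sentiment_score("The worst, most broken site I have seen"))  # -1.0
```

Averaging such scores across a site’s pages gives a single sentiment figure per site, which is the kind of number the 107 percent comparison would be built on.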
Another test looked at whether an increase in AI-generated text reduces the “range of unique ideas and diverse viewpoints” on offer. The researchers found that AI did in fact make the internet less ideologically diverse, with AI-assisted websites scoring about 33 percent higher on tests for “semantic similarity” than human-made websites.
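The article doesn’t say how the study computes semantic similarity (modern work typically uses embedding models), but the underlying idea of scoring how alike two texts are can be sketched with a simple bag-of-words cosine similarity, using only the standard library:

```python
# Bag-of-words cosine similarity: a simplified stand-in for the embedding-based
# semantic-similarity measures such a study would likely use.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """1.0 means identical word distributions, 0.0 means no shared words."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return 0.0 if norm_a == 0 or norm_b == 0 else dot / (norm_a * norm_b)

print(round(cosine_similarity("the cat sat on the mat",
                              "the cat sat on the mat"), 6))   # 1.0
print(cosine_similarity("quantum computing news",
                        "garden vegetable recipes"))           # 0.0
```

A corpus whose pairwise similarity scores cluster high, as the AI-assisted sites did, contains fewer distinct ideas, which is what the 33 percent figure captures.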
While these two tests confirmed the researchers’ assumptions about artificial intelligence, the others did not: the remaining four theories were not borne out. In particular, the researchers suspected that AI would lead to an increase in disinformation, but their analysis of the evidence did not support this hypothesis. They also guessed that AI writing would be less likely to link to external sources and would be stylistically more generic than human writing. Confounding expectations, none of these theories was supported by the evidence.
Although the analysis found that the ideas promoted in AI writing were more uniform (and, in particular, more consistently cheerful), the writing style itself did not flatten. This came as a real surprise to the researchers, who expected to see a clear shift toward more generic output. “Everyone on the team expected it to be true,” says Stanford researcher Maty Bohacek. “But we just don’t have significant evidence for that.”
Before conducting the analysis, the research team commissioned a survey about what people expected of AI writing. Comparing the survey with the results showed that the researchers were not the only ones with mistaken expectations: many commonly held beliefs about AI writing turned out to be wrong.
Like the researchers, most respondents assumed that as the number of AI-generated websites grows, they will encounter more fake news. The vast majority of respondents also assumed that AI writing would be less likely to cite external sources and would take on an increasingly generic, uniform tone. “It’s interesting that people expected the worst results,” Bohacek says.
This study doesn’t have the final word on what AI is doing to the internet. “We just wanted to make a breakthrough,” says Bohacek, who sees it as a starting point for deeper research. As a snapshot of the impact of the AI boom, it offers a particularly human insight: sometimes it’s just difficult to predict how things will unfold.
