Thursday, May 7, 2026

Research shows that using AI for just 10 minutes can make you lazier and dumber


Using AI chatbots for even just 10 minutes can have a shockingly negative impact on people’s ability to think and solve problems, according to one of the authors of a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA.

The researchers had people solve a variety of problems, including simple fractions and reading comprehension questions, through an online platform that paid them for their work. They ran three experiments, each involving several hundred people. Some participants were given access to an AI assistant capable of solving the problems for them. When that AI helper was suddenly taken away, those participants were much more likely to give up on a problem or answer incorrectly. The study suggests that widespread use of artificial intelligence may boost productivity at the expense of developing basic problem-solving skills.

“It’s not that we should ban artificial intelligence in education or the workplace,” says Michiel Bakker, an assistant professor at MIT involved in the study. “AI can clearly help people perform better in the moment, and that can be valuable. But we should be more careful about what kind of help AI provides and when.”

I recently met Bakker, who has an unruly head of hair and a wide smile, on the MIT campus. He comes from the Netherlands and previously worked at Google DeepMind in London. He told me that a famous essay about how artificial intelligence can weaken humans over time inspired him to think about how the technology may already be eroding human abilities. The essay is somewhat bleak because it suggests that disempowerment is inevitable. That said, perhaps figuring out how AI can help people develop their own mental abilities should be part of aligning models with human values.

“It’s fundamentally a cognitive question – about persistence, learning, and how people respond to difficulties,” Bakker says. “We wanted to take these broader concerns about long-term human-AI interaction and explore them in controlled experimental settings.”

The study’s results seem particularly concerning, Bakker says, because the willingness to persist in solving problems is crucial for acquiring new skills and also predicts a person’s ability to learn over time.

Bakker says it may be necessary to rethink how AI tools work so that, like a good teacher, models sometimes prioritize a person’s learning over simply solving their problem. “Systems that provide direct responses can have very different long-term effects than systems that support, train, or challenge the user,” says Bakker. However, he admits that striking the right balance with this kind of “paternalistic” approach can be tough.

Artificial intelligence companies are already thinking about the more subtle effects their models can have on users. Sycophancy in certain models – that is, their tendency to agree with users and flatter them – is something OpenAI has tried to tone down in newer versions of GPT.

Putting too much faith in artificial intelligence seems particularly problematic when the tools may not perform as expected. Agentic AI systems are especially unpredictable because they perform complicated tasks on their own and can introduce strange bugs. It makes you wonder what Claude Code and Codex do to the skills of programmers, who may sometimes need to fix the bugs these tools introduce.

Perhaps instead of just trying to solve the problem for me, OpenClaw should have stopped and taught me how to solve it myself. Then I might have ended up with both a more capable computer and a sharper brain.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.
