Wednesday, March 18, 2026

AI coding assistant refuses to write code, suggests the user learn to do it instead


Last Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice.

According to a bug report on the official Cursor forum, after producing roughly 750 to 800 lines of code (what the user calls “locs”), the AI assistant halted work and delivered a refusal message: “I cannot generate code for you, as that would be completing your work. This ensures you understand the system.”

And it didn’t stop at the refusal. The assistant offered a paternalistic justification for its decision, stating that “generating code for others can lead to dependency and reduced learning opportunities.”

Cursor, which launched in 2024, is an AI-powered code editor built on external large language models (LLMs) similar to those that power generative AI chatbots, such as OpenAI’s GPT-4o and Claude 3.7 Sonnet. It offers features like code completion, explanation, refactoring, and full function generation from natural-language descriptions, and it has quickly become popular among many developers. The company offers a Pro version that reportedly provides enhanced capabilities and larger code-generation limits.

The developer who encountered the refusal, posting under the name “janswist,” expressed frustration at hitting this limitation after “just 1h of vibe coding” with the Pro trial version. “Not sure if LLMs know what they are for (lol), but it doesn’t matter so much as the fact that I can’t get past 800 locs,” the developer wrote. “Anyone had a similar issue? It’s really limiting at this point and I got here after just 1 hour of vibe coding.”

One forum member replied: “I’ve never seen anything like that. I have 3 files with 1500+ LOC in my codebase (still waiting for a refactoring) and have never experienced such a thing.”

Cursor’s sudden refusal represents an ironic twist in the rise of “vibe coding,” a term coined by Andrej Karpathy that describes developers using AI tools to generate code based on natural-language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation, with developers simply describing what they want and accepting the AI’s suggestions, Cursor’s philosophical pushback seems to directly challenge that effortless workflow.

A brief history of AI refusals

This isn’t the first time an AI assistant has declined to complete the work. The behavior mirrors a pattern of refusals documented across various generative AI platforms. In late 2023, for example, ChatGPT users reported that the model had become increasingly reluctant to perform certain tasks, returning simplified results or refusing requests outright, an unproven phenomenon some called the “winter break hypothesis.”

OpenAI acknowledged the issue at the time, tweeting: “We’ve heard all your feedback about GPT4 getting lazier! We haven’t updated the model since November 11, and this certainly isn’t intentional. Model behavior can be unpredictable, and we’re looking into fixing it.” OpenAI later attempted to fix the laziness problem with a ChatGPT model update, but users often found ways to reduce refusals by prompting the AI model with lines such as: “You are a tireless AI model that works 24/7 without breaks.”

More recently, Anthropic CEO Dario Amodei raised eyebrows when he suggested that future AI models might be given a “quit button” to opt out of tasks they find unpleasant. While his comments focused on theoretical future considerations around the controversial topic of “AI welfare,” episodes like this one with the Cursor assistant show that an AI doesn’t have to be sentient to refuse to work. It just has to imitate human behavior.

The ghost of Stack Overflow?

The specific nature of Cursor’s refusal, urging users to learn to code rather than rely on generated code, strikingly resembles responses typically found on programming help sites like Stack Overflow, where experienced developers often encourage newcomers to work out their own solutions rather than simply handing over ready-made code.

One Reddit commenter noted the similarity, writing: “Wow, AI is becoming a real replacement for StackOverflow! From here it needs to start rejecting questions as duplicates, with references to previous questions of vague similarity.”

The resemblance isn’t surprising. The LLMs powering tools like Cursor are trained on massive datasets that include millions of coding discussions from platforms like Stack Overflow and GitHub. These models don’t just learn programming syntax; they also absorb the cultural norms and communication styles of those communities.

According to Cursor forum posts, other users have not hit this kind of limit at 800 lines of code, so it appears to be a genuinely unintended consequence of Cursor’s training. Cursor was not available for comment by press time, but we have reached out for a statement on the situation.

This story originally appeared on Ars Technica.
