On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named "Sam" told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model had invented the policy, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.
This is the latest instance of AI confabulations (also called "hallucinations") causing potential business damage. Confabulations are a type of "creative gap-filling" response in which AI models invent plausible-sounding but false information. Instead of admitting uncertainty, AI models often prioritize producing confident, credible-seeming answers, even when that means manufacturing information from scratch.
For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor's case, potentially canceled subscriptions.
How it unfolded
The incident began when a Reddit user named BrokenToasterOven noticed that while swapping between a desktop, a laptop, and a remote dev box, Cursor sessions were unexpectedly terminated.
"Logging into Cursor on one machine immediately invalidates the session on any other machine," BrokenToasterOven wrote in a message that was later deleted by r/cursor moderators. "This is a significant UX regression."
Confused and frustrated, the user wrote an email to Cursor support and quickly received a reply from Sam: "Cursor is designed to work with one device per subscription as a core security feature," the email response read. The reply sounded definitive and official, and the user did not suspect that Sam was not human.
After the initial Reddit post, users took the response as official confirmation of an actual policy change, one that broke habits essential to many programmers' daily routines. "Multi-device workflows are table stakes for devs," one user wrote.
Shortly afterward, several users publicly announced their subscription cancellations on Reddit, citing the nonexistent policy as their reason. "I literally just cancelled my sub," wrote the original Reddit poster, adding that their workplace was "purging it completely." Others joined in: "Yep, I'm canceling as well, this is asinine." Soon after, moderators locked the Reddit thread and removed the original post.
"Hey! We have no such policy," wrote a Cursor representative in a Reddit reply three hours later. "You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot."
AI confabulations as a business risk
Cursor's predicament recalls a similar episode from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada's support after his grandmother died, and the airline's AI agent incorrectly told him he could book a regular-priced flight and apply for a bereavement rate retroactively. When Air Canada later denied his refund request, the company argued that "the chatbot is a separate legal entity that is responsible for its own actions." A Canadian tribunal rejected this defense, ruling that companies are responsible for information provided by their AI tools.
Rather than disputing responsibility as Air Canada did, Cursor acknowledged the error and took steps to make amends. Cursor cofounder Michael Truell later apologized on Hacker News for the confusion about the nonexistent policy, explaining that the user had been refunded and that the issue stemmed from a backend change meant to improve session security, which unintentionally created session-invalidation problems for some users.
"Any AI responses used for email support are now clearly labeled as such," he added. "We use AI-assisted responses as the first filter for email support."
Still, the incident raised lingering questions about disclosure among users, since many people who interacted with Sam apparently believed it was human. "LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive," one user wrote on Hacker News.
While Cursor fixed the technical bug, the episode shows the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company selling AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users represents a particularly awkward self-inflicted wound.
"There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore," one user wrote on Hacker News, "and then a company that would benefit from that narrative gets directly hurt by it."
This story originally appeared on Ars Technica.