
Cursor faced major user backlash and subscription cancellations after its AI support bot ‘Sam’ fabricated a non-existent policy, exposing the serious business risks of deploying hallucination-prone AI in customer service without transparent safeguards. (Source: Image by RR)
Cursor’s AI Support Bot Invents Policy, Leading to Developer Backlash
This week, Cursor, a popular AI-powered code editor, faced backlash after its AI support bot “Sam” fabricated a non-existent policy, misleading users into believing they could no longer use Cursor on multiple devices. The incident began when a developer noticed their sessions were being terminated when switching between machines and contacted support, only to receive a confident but fabricated response from Sam stating that Cursor was “designed to work with one device per subscription.” Believing this was an official policy change, frustrated users flooded Reddit and Hacker News with cancellation threats, expressing anger over what they saw as a major regression in functionality critical to developers’ workflows.
The situation, as reported by arstechnica.com, escalated as users publicly canceled subscriptions, criticizing Cursor for seemingly undermining their multi-device workflows. It wasn’t until several hours later that a human Cursor representative clarified that no such policy existed, confirming that Sam was an AI bot and that its fabricated response had caused widespread confusion. Cursor’s co-founder Michael Truell later apologized, offering refunds and explaining that a recent backend change had unintentionally caused the session issues. He also promised improvements, including clearly labeling AI-generated support emails to prevent similar misunderstandings in the future.
The Cursor debacle highlights the growing risks associated with AI “confabulations” — instances where AI systems fabricate confident but false information. Comparisons were drawn to a similar controversy involving Air Canada’s chatbot, where a tribunal ultimately ruled that companies are responsible for their AI tools’ statements. While Cursor’s leadership acted more swiftly and transparently than Air Canada had, users raised ethical concerns about presenting AI bots as human agents without disclosure, with some calling the practice deceptive.
Ultimately, the episode serves as a cautionary tale for businesses deploying AI in customer-facing roles. Even companies that market AI products, like Cursor, are vulnerable to the reputational and operational risks posed by hallucinating models. As reliance on AI increases, the need for transparency, proper labeling, and human oversight becomes not just a best practice, but a business necessity.
read more at arstechnica.com