An AI customer service chatbot made up a company policy and created a mess


On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named "Sam" told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model had invented the policy, triggering a wave of complaints and cancellation threats documented on Hacker News and Reddit.

This marks the latest instance of AI confabulation (also called "hallucination") causing potential business harm. Confabulations are a type of "creative gap-filling" response in which AI models invent plausible-sounding but false information. Rather than admitting uncertainty, AI models often prioritize producing plausible, confident answers, even when that means fabricating information from scratch.

For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor's case, potentially canceled subscriptions.

How it unfolded

The incident began when a Reddit user named BrokenToasterOven noticed that while swapping between a desktop, a laptop, and a remote dev box, Cursor sessions were unexpectedly terminated.

"Logging into Cursor on one machine immediately invalidates the session on any other machine," BrokenToasterOven wrote in a message that was later deleted by r/cursor moderators. "This is a significant UX regression."

Confused and frustrated, the user wrote an email to Cursor support and quickly received a reply from Sam: "Cursor is designed to work with one device per subscription as a core security feature," the email response read. The reply sounded definitive and official, and the user did not suspect that Sam was not human.

After the initial Reddit post, users took the response as official confirmation of an actual policy change, one that broke habits essential to many programmers' daily routines. "Multi-device workflows are table stakes for devs," wrote one user.

Shortly afterward, several users publicly announced their subscription cancellations on Reddit, citing the nonexistent policy as the reason. "I literally just canceled my sub," wrote the original Reddit poster, adding that their workplace was now "purging it completely." Others chimed in: "Yeah, I'm canceling as well, this is asinine." Soon after, moderators locked the Reddit thread and removed the original post.

"Hey! We have no such policy," wrote a Cursor representative in a Reddit reply three hours later. "You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from an AI support bot."

AI confabulations as a business risk

The Cursor debacle recalls a similar episode from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada's support after his grandmother's death, and the airline's AI agent incorrectly told him he could book a regular-price flight and request bereavement rates retroactively. When Air Canada later denied his refund request, the company argued that "the chatbot is a separate legal entity that is responsible for its own actions." A Canadian tribunal rejected that defense, ruling that companies are responsible for information provided by their AI tools.

Rather than disputing responsibility as Air Canada did, Cursor acknowledged the error and took steps to make amends. Cursor co-founder Michael Truell later apologized on Hacker News for the confusion over the nonexistent policy, explaining that the user had been refunded and that the issue resulted from a backend change meant to improve session security, which unintentionally created session invalidation problems for some users.
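To see how a well-intentioned security change can produce exactly this behavior, consider a minimal sketch, purely hypothetical and not Cursor's actual code: if a backend enforces a single active session per user by storing only one valid token, every new login silently revokes the sessions on all other devices.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical sketch (not Cursor's implementation): enforcing one
// active session per user as a "security feature" means each new
// login overwrites the sole valid token, logging out other devices.
const activeSessions = new Map<string, string>(); // userId -> only valid token

function login(userId: string): string {
  const token = randomUUID();
  activeSessions.set(userId, token); // overwrites any prior session
  return token;
}

function isSessionValid(userId: string, token: string): boolean {
  return activeSessions.get(userId) === token;
}

// Demo: a second login invalidates the first device's session.
const desktopToken = login("user-1");
const laptopToken = login("user-1");
console.log(isSessionValid("user-1", desktopToken)); // false: desktop kicked out
console.log(isSessionValid("user-1", laptopToken)); // true
```

A multi-device-friendly version of the same security goal would track a set of tokens per user and revoke them individually, rather than keeping a single slot that each login clobbers.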

"Any AI responses used for email support are now clearly labeled as such," he added. "We use AI-assisted responses as the first filter for email support."

Still, the incident raised lingering questions about disclosure to users, since many people who interacted with Sam apparently believed it was human. "LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive," one user wrote on Hacker News.

While Cursor fixed the technical bug, the episode shows the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company that sells AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users represents a particularly awkward self-inflicted wound.

"There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore," one user wrote on Hacker News, "and then a company that would benefit from that narrative gets directly hurt by it."

This story originally appeared on Ars Technica.
