Saturday, May 17, 2025

Anthropic blames Claude AI for "embarrassing and unintentional mistake" in a legal filing


Anthropic has responded to allegations that it cited a fabricated, AI-generated source in its legal fight with music publishers, saying its Claude chatbot made an "honest citation mistake."

In a response filed on Thursday, Anthropic attorney Ivan Dukanovic said the cited source was genuine and that Claude had indeed been used to format the legal citations in the document. While the incorrect volume and page numbers generated by the chatbot were caught and corrected by a "manual citation check," Anthropic admits that errors in the wording went undetected.

Dukanovic said that "unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors," and that the error was not a "fabrication of authority." The company apologized for the inaccuracy and confusion caused by the citation error, calling it "an embarrassing and unintentional mistake."

This is one of a growing number of examples of how the use of AI tools for legal citations has caused problems in courtrooms. Last week, a California judge sanctioned two law firms for failing to disclose that AI was used to create a supplemental brief containing "bogus" materials that "didn't exist." A misinformation expert admitted in December that ChatGPT had hallucinated citations in a legal filing he submitted.
