Tuesday, June 3, 2025

Why do lawyers keep using ChatGPT?


Every few weeks, it seems, a new headline appears about a lawyer getting in trouble for submitting filings containing, in the words of one judge, “bogus AI-generated research.” The details vary, but the throughline is the same: a lawyer turns to a large language model (LLM) like ChatGPT to help with legal research (or worse, writing), the LLM hallucinates cases that don’t exist, and the lawyer is none the wiser until a judge or opposing counsel points out the mistake. In some cases, including a 2023 aviation lawsuit, lawyers have had to pay fines for submitting filings with AI-generated hallucinations. So why haven’t they stopped?

The answer mostly comes down to time pressure, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw now have AI integrations. For lawyers juggling heavy caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren’t necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don’t understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a “super search engine.” It took submitting a filing with fake citations to reveal that it’s more like a random-phrase generator, one that can give you either correct information or convincingly phrased nonsense.

Andrew Perlman, dean of Suffolk University Law School, argues that many lawyers use AI tools without incident, and that those caught with fake citations are outliers. “I think that what we’re seeing now, although these hallucinations are real and lawyers have to take them very seriously and be careful, doesn’t mean that these tools don’t have enormous benefits and use cases for the delivery of legal services,” Perlman said.

In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they had used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research “case law, statutes, forms or sample language for orders.” The attorneys surveyed see it as a time-saving tool, and half of respondents said “exploring the potential for implementing AI” at work is their highest priority. “The role of a good lawyer is as a ‘trusted advisor,’ not as a producer of documents,” one respondent said.

But as plenty of recent examples have shown, the documents produced by AI aren’t always accurate, and in some cases aren’t real at all.

In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested in 2024 over the publication of unaired Fox News footage, submitted a motion to dismiss the case against him on First Amendment grounds. After finding that the filing included “significant misrepresentations and misquotations of supposedly pertinent case law and history,” Judge Kathryn Kimball Mizelle of Florida’s Middle District ordered the motion stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times.

Mizelle ultimately allowed Burke’s lawyers, Mark Rasch and Michael Maddux, to submit a new motion. In a separate filing explaining the errors, Rasch wrote that he “takes sole and exclusive responsibility for these errors.” Rasch said he had used ChatGPT Pro’s “deep research” feature, which The Verge has previously tested with mixed results, as well as Westlaw’s AI feature.

Rasch is not alone. Lawyers representing Anthropic recently admitted to using the company’s own Claude AI to help write an expert witness declaration submitted as part of a copyright infringement lawsuit brought against Anthropic by music publishers. That filing included a citation with an “inaccurate title and inaccurate authors.” Last December, misinformation expert Jeff Hancock admitted that he had used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfakes. Hancock’s filing included “two citation errors, popularly referred to as ‘hallucinations,’” and listed incorrect authors for another citation.

These documents do have real stakes, at least in the eyes of judges. In one recent case, a California judge presiding over a suit against State Farm was initially swayed by the arguments in a brief, only to find that the case law it cited was completely invented. “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them, only to find that they didn’t exist,” Judge Michael Wilner wrote.

Perlman said there are several lower-risk ways lawyers are using generative AI in their work, including searching large troves of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views. “I think that in almost every task there are ways in which generative AI can be useful, not as a substitute for lawyers’ judgment, not as a substitute for the expertise that lawyers bring to the table, but to supplement what lawyers do and allow them to do their work better, faster, and cheaper,” Perlman said.

But like anyone who uses AI tools, lawyers who rely on them for legal research and writing need to be careful to check the work they produce. Part of the problem is that attorneys are often pressed for time, a problem that, Perlman says, existed long before LLMs appeared. “Even before the emergence of generative AI, lawyers would file documents with citations that didn’t really address the issue they claimed to address,” Perlman said. “That was just a different kind of problem. (That said, the cases usually existed.)”

Another, more insidious problem is that lawyers, like others who use LLMs to help with research and writing, can be too trusting of what AI produces. “I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,” Perlman said.

Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT like a junior-level associate. He has also used ChatGPT to help write legislation. In 2024, he included AI-generated text in a portion of a bill on deepfakes, having the LLM supply the “baseline definition” of what deepfakes are, and then, “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression,” he told The Guardian at the time. Kolodin said he “may have” discussed his use of ChatGPT with the bill’s main Democratic cosponsor, but otherwise wanted it to be “an Easter egg” in the bill. The bill passed into law.

Kolodin, who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election, has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he simply checks the citations to make sure they’re real.

“You don’t typically send out a junior associate’s work product without checking the citations,” Kolodin said. “It’s not only machines that hallucinate; a junior associate could read a case wrong, it doesn’t really stand for the proposition cited, whatever. You still have to check it, but then, you also have to do that with an associate, unless they’re pretty experienced.”

Kolodin said he uses both ChatGPT’s “deep research” tool and LexisNexis’ AI tools. Like Westlaw, LexisNexis is a legal research tool used primarily by attorneys. Kolodin said that in his experience, LexisNexis has a higher hallucination rate than ChatGPT, whose rate, he claims, “has dropped substantially over the past year.”

AI use among lawyers has become so common that in 2024 the American Bar Association issued its first guidance on the use of LLMs and other AI tools.

Lawyers who use AI tools “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI, the opinion reads. The guidance advises lawyers to “acquire a general understanding of the benefits and risks of the GAI tools” they use, or, in other words, to not assume that an LLM is a “super search engine.” Lawyers should also weigh the confidentiality risks of entering information about their cases into LLMs, and consider whether to tell their clients that they’re using LLMs and other AI tools.

Perlman is bullish on lawyers’ use of AI. “I think that generative AI is going to be the most impactful technology the legal profession has ever seen, and that lawyers will be expected to use these tools in the future,” he said. “I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don’t.”

Others, including one of the judges who sanctioned lawyers for submitting filings full of AI-generated hallucinations, are more skeptical. “Even with recent advances,” Wilner wrote, “no reasonably competent attorney should outsource research and writing to this technology, particularly without any attempt to verify the accuracy of that material.”
