Why do lawyers keep using ChatGPT?

Every few weeks, it seems, there’s a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, “bogus AI-generated research.” The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help with legal research (or worse, writing), the LLM hallucinates cases that don’t exist, and the lawyer is none the wiser until a judge or opposing counsel points out the mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven’t they stopped?
The answer mostly comes down to time pressures, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw now have AI integrations. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren’t necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don’t understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a “super search engine.” It took submitting a filing with fake citations to reveal that it is more like a random-phrase generator, one that can give you either correct information or convincingly worded nonsense.
Andrew Perlman, dean of Suffolk University Law School, argues that many lawyers use AI tools without incident, and that the ones caught with fake citations are outliers. “I think what we’re seeing now, although these problems of hallucination are real, and lawyers have to take them very seriously and be careful about them, doesn’t mean that these tools don’t have enormous benefits and use cases for the delivery of legal services,” he said.
In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they have used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research “case law, statutes, forms or sample language for orders.” The lawyers surveyed see it as a time-saving tool, and half said that “exploring the potential to implement AI” at work is their highest priority. “The role of a good lawyer is as a ‘trusted advisor,’ not as a producer of documents,” one respondent said.
But as plenty of recent examples have shown, the documents produced by AI are not always accurate, and in some cases are not real at all.
In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested in 2024 for publishing unaired Fox News footage, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included “significant misrepresentations and misquotations of supposedly pertinent case law and history,” Judge Kathryn Kimball Mizelle, of Florida’s middle district, ordered the motion stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times.
Mizelle ultimately allowed Burke’s lawyers, Mark Rasch and Michael Maddux, to submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he “assumes sole and exclusive responsibility for these errors.” Rasch said he used the “deep research” feature on ChatGPT Pro, which The Verge has previously tested with mixed results, as well as Westlaw’s AI feature.
Rasch isn’t alone. Lawyers representing Anthropic recently admitted to using the company’s Claude AI to help write an expert witness declaration submitted as part of a copyright infringement lawsuit brought against the company by music publishers. That filing included a citation with an “inaccurate title and inaccurate authors.” Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating the use of deepfakes. Hancock’s filing included “citation errors, popularly referred to as ‘hallucinations,’” and incorrectly listed the authors of another citation.
These documents do, in fact, matter, at least in the eyes of judges. In one recent case, a California judge presiding over a suit against State Farm was initially swayed by the arguments in a brief, only to discover that the case law cited was entirely made up. “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them, only to find that they didn’t exist,” Judge Michael Wilner wrote.
Perlman said there are plenty of lower-risk ways lawyers already use AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or opposing positions. “I think in almost every task, there are ways in which generative AI can be useful, not a substitute for lawyers’ judgment, not a substitute for the expertise that lawyers bring to the table, but to supplement what lawyers do and enable them to do their work better, faster, and more cheaply,” he said.
But like anyone else who uses AI tools, lawyers who rely on them for legal research and writing have to be careful to check the work they produce, Perlman said. Part of the problem is that lawyers are often pressed for time, an issue he says predates LLMs. “Even before the emergence of generative AI, lawyers would file documents with citations that didn’t really address the issue they claimed to address,” Perlman said. “It was just a different kind of problem. Sometimes when lawyers are rushed, they insert citations without checking them properly; they don’t really see whether the case has been overturned or reversed.” (Those cases, at least, typically existed.)
Another, more insidious problem is that lawyers, like others who use LLMs to help with research and writing, can be too trusting of what AI produces. “I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be well crafted,” Perlman said.
Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT like a junior-level associate. He has also used ChatGPT to help write legislation. In 2024, he included AI-generated text in part of a bill on deepfakes; the LLM provided the “baseline definition” of what a deepfake is, and then “I, the human, added in the protections for human rights, things like excluding comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin told The Guardian at the time. Kolodin said he “may have” discussed his use of ChatGPT with the bill’s main cosponsor, but he wanted it to be “an Easter egg” in the bill. The bill was passed into law.
Kolodin, who was sanctioned by the Arizona State Bar in 2020 over his involvement in lawsuits challenging the result of the 2020 election, uses ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he simply checks the citations to make sure they are real.
“You don’t usually send out a junior associate’s work product without checking the citations,” Kolodin said. “It’s not just that machines hallucinate; a junior associate could also read the case wrong, or cite something that doesn’t really stand for the proposition anyway, whatever.”
Kolodin said he uses both ChatGPT Pro’s “deep research” tool and LexisNexis’ AI tool. Like Westlaw, LexisNexis is a legal research tool used primarily by lawyers. Kolodin said that in his experience it has a higher hallucination rate than ChatGPT, which he says has “dropped substantially over the past year.”
AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on lawyers’ use of LLMs and other AI tools.
Lawyers who use AI tools, the guidance says, “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI. It recommends that lawyers “acquire a general understanding of the benefits and risks of the GAI tools” they use, or, in other words, that they not assume an LLM is a “super search engine.” Lawyers should also weigh the confidentiality risks of entering information about their cases into LLMs, and consider whether to tell their clients about their use of LLMs and other AI tools.
Perlman is bullish about lawyers’ use of AI. “I do think that generative AI is going to be the most impactful technology the legal profession has ever seen, and that lawyers will be expected to use these tools in the future,” he said. “I think that at some point we’ll stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don’t.”
Others, including one of the judges who sanctioned lawyers for submitting filings full of AI-generated hallucinations, are more skeptical. “Even with recent advances,” Wilner wrote, “no reasonably competent attorney should outsource research and writing to this technology, particularly without any attempt to verify the accuracy of that material.”