The Verge - Artificial Intelligence, May 14, 06:34
Judge slams lawyers for ‘bogus AI-generated research’

A California judge fined two law firms $31,000 for the undisclosed use of AI to generate a legal filing that contained numerous false, inaccurate, and misleading legal citations and quotations. The judge said that attorneys should not outsource research and writing to AI. In this case, the plaintiff's lawyer used AI to generate an outline for a supplemental brief that contained bogus AI-generated research, then sent it to a second firm, K&L Gates, which added the material to the brief without verifying it. The judge found that at least two of the cited authorities did not exist; after he asked for clarification, the firm's resubmitted brief contained even more fabricated citations and quotations. The lawyers admitted to using Google Gemini and Westlaw Precision with CoCounsel.

⚖️ A California judge imposed $31,000 in sanctions on law firms for the undisclosed use of AI to generate a legal brief containing false citations, stressing that attorneys should not outsource research and writing to AI.

🤖 The plaintiff's lawyer in a civil lawsuit used the AI tools Google Gemini and Westlaw Precision with CoCounsel to generate an outline for a supplemental brief, but the outline contained "bogus AI-generated research" that rendered the filing inaccurate.

⚠️ After receiving the outline containing AI-generated content, K&L Gates added the unverified information to the brief without adequate cite-checking, compounding the errors and the potential to mislead.

🚨 On review, the judge found that at least two of the cited legal authorities did not exist at all; further inquiry uncovered even more fabricated citations and quotations, exposing the risks of relying on AI for legal research.

A California judge slammed a pair of law firms for the undisclosed use of AI after he received a supplemental brief with “numerous false, inaccurate, and misleading legal citations and quotations.” In a ruling submitted last week, Judge Michael Wilner imposed $31,000 in sanctions against the law firms involved, saying “no reasonably competent attorney should out-source research and writing” to AI, as pointed out by law professors Eric Goldman and Blake Reid on Bluesky.

“I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” Judge Wilner writes. “That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order.”

As noted in the filing, a plaintiff’s legal representative in a civil lawsuit against State Farm used AI to generate an outline for a supplemental brief. That outline contained “bogus AI-generated research” when it was sent to a separate law firm, K&L Gates, which added the information to a brief. “No attorney or staff member at either firm apparently cite-checked or otherwise reviewed that research before filing the brief,” Judge Wilner writes.

When Judge Wilner reviewed the brief, he found that “at least two of the authorities cited do not exist at all.” After he asked K&L Gates for clarification, the firm resubmitted the brief, which Judge Wilner said contained “considerably more made-up citations and quotations beyond the two initial errors.” He then issued an Order to Show Cause, prompting the lawyers to give sworn statements confirming the use of AI. The lawyer who created the outline admitted to using Google Gemini, as well as the AI legal research tools in Westlaw Precision with CoCounsel.

This isn’t the first time lawyers have been caught using AI in the courtroom. Former Trump lawyer Michael Cohen cited made-up court cases in a legal document after mistaking Google Gemini, then called Bard, for “a super-charged search engine” rather than an AI chatbot. A judge also found that lawyers suing a Colombian airline had included a slew of phony cases generated by ChatGPT in their brief.

“The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong,” Judge Wilner writes. “And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm’s way.”
