The Verge - Artificial Intelligence, December 5, 2024
Misinformation researcher admits ChatGPT added fake details to his court filing

Jeff Hancock, founder of the Stanford Social Media Lab, was found to have included false citations in a legal filing and subsequently admitted that he had used ChatGPT to organize his references, which led to "hallucination" errors. The filing was submitted in support of a Minnesota law targeting the use of deepfake technology to influence elections, a law now being challenged in federal court. Hancock said the errors do not change the substance of the declaration, stood by his views, and apologized for any confusion caused. The incident has raised concerns about the use of AI tools in drafting legal documents and highlights the limitations and potential risks of AI models.

🤔 **Expert Jeff Hancock used ChatGPT to organize citations in a legal filing, producing erroneous references.** Hancock admitted that, in a filing supporting Minnesota's "Use of Deep Fake Technology to Influence an Election" law, he used ChatGPT to help organize his references, which led to "hallucination" errors in which nonexistent citations were generated.

🔎 **Opposing counsel questioned the filing's reliability and asked to have it excluded.** After discovering false citations in Hancock's filing, the attorneys challenging the law called the document unreliable and asked the court to exclude it from consideration.

📝 **Hancock denies using AI to write the filing and stands by its core claims.** In a subsequent declaration, Hancock said he used ChatGPT only to organize citations, not to write the content. He stressed that he wrote and reviewed the substance of the declaration himself and maintains that all of its claims are supported by the most recent scholarly research.

⚠️ **The incident raises concerns about using AI tools to draft legal documents.** The Hancock case has drawn attention to the limitations and potential risks of AI models in legal filings, including the generation of false or misleading information.

🙏 **Hancock apologized for the errors but stands by the declaration's substance.** Hancock expressed regret for any confusion the incident caused, while maintaining that the substantive points of the declaration are supported by scholarly research.

Image: The Verge

A misinformation expert accused of using AI to generate a legal document admitted he used ChatGPT to help him organize his citations, leading to “hallucinations” that critics said called the entire filing into question. Jeff Hancock, the founder of the Stanford Social Media Lab who wrote the document, says the errors don’t change the “substantive points in the declaration.”

Hancock submitted the affidavit in support of Minnesota's "Use of Deep Fake Technology to Influence an Election" law, which is being challenged in federal court by Christopher Kohls — a conservative YouTuber who posts under the name Mr Reagan — and Minnesota state Rep. Mary Franson. After discovering that Hancock's filing seemed to contain citations that didn't exist, attorneys for Kohls and Franson said it was "unreliable" and asked that it be excluded from consideration.

In a subsequent declaration filed late last week, Hancock acknowledged that he used ChatGPT while preparing the declaration but denied using it to write the document itself. "I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it, all of which are supported by the most recent scholarly research in the field and reflect my opinion as an expert regarding the impact of AI technology on misinformation and its societal effects," Hancock wrote.

As for the citation errors, Hancock explained that he used Google Scholar and GPT-4o “to identify articles that were likely to be relevant to the declaration so that I could merge that which I knew already with new scholarship.” Hancock says he used GPT-4o to create a citation list, not to write the document, and didn’t realize the tool generated “two citation errors, popularly referred to as ‘hallucinations’” and added incorrect authors to another citation.

“I did not intend to mislead the Court or counsel,” Hancock wrote in his most recent filing. “I express my sincere regret for any confusion this may have caused. That said, I stand firmly behind all the substantive points in the declaration.”

