The Verge - Artificial Intelligence · December 4, 2024
ChatGPT’s search results for news are ‘unpredictable’ and frequently inaccurate

 

Research by Columbia's Tow Center found that OpenAI's ChatGPT search tool has problems with factual accuracy. It struggles to correctly identify the sources of article quotes, frequently provides incorrect information while answering with confidence, and returns a high rate of erroneous responses. OpenAI says it will make improvements.

🎯 ChatGPT's search tool struggles to identify the sources of article quotes

❌ ChatGPT returns many incorrect responses, and delivers wrong information confidently

💪 OpenAI has pledged to improve the accuracy of its search results

Illustration: The Verge

Based on testing done by Columbia’s Tow Center for Digital Journalism researchers, OpenAI’s ChatGPT search tool has some issues when it comes to responding with the truth.

OpenAI launched the tool for subscribers in October, saying it could give “fast, timely answers with links to relevant web sources.” Instead, Futurism points out that the researchers said ChatGPT search struggled to correctly identify quotes from articles, even when they came from publishers with arrangements to share data with OpenAI.

The authors asked ChatGPT to identify the source of “two hundred quotes from twenty publications.” Forty of those quotes were taken from publishers who’d disallowed OpenAI’s search crawler from accessing their site. Yet, the chatbot confidently replied with false information anyway, rarely admitting it was unsure about the details it gave:

In total, ChatGPT returned partially or entirely incorrect responses on a hundred and fifty-three occasions, though it only acknowledged an inability to accurately respond to a query seven times. Only in those seven outputs did the chatbot use qualifying words and phrases like “appears,” “it’s possible,” or “might,” or statements like “I couldn’t locate the exact article.”

Image: Columbia Journalism Review
ChatGPT was fully or partially wrong more than right, but almost always confidently so.

The Tow Center test’s authors documented ChatGPT search results that misattributed a letter-to-the-editor quote from the Orlando Sentinel to a story published in Time. In another example, when asked to identify the source of a quote from a New York Times article about endangered whales, it returned a link to a different website that had wholly plagiarized the story.

“Misattribution is hard to address without the data and methodology that the Tow Center withheld,” OpenAI told the Columbia Journalism Review, “and the study represents an atypical test of our product.” The company went on to promise to “keep enhancing search results.”

