TechCrunch News · April 22, 20:36
Crowdsourced AI benchmarks have serious flaws, some experts say

The article examines AI labs' growing reliance on crowdsourced benchmarking platforms to evaluate their models. Experts argue the approach has ethical and academic flaws, chiefly around the validity of the benchmarks, developers gaming them, and inadequate incentives for volunteers. It analyzes the limitations of platforms such as Chatbot Arena and stresses the importance of dynamic benchmarks, independent evaluation, involvement of practicing professionals, and fair compensation for evaluators. At the same time, it acknowledges the value of crowdsourced evaluation and calls on model developers to communicate results clearly and respond to criticism in order to build a more reliable AI evaluation ecosystem.

🧐 **Benchmark validity**: Experts note that platforms such as Chatbot Arena have not shown that user votes correlate with the construct being measured, which calls the reliability of the results into question. Without a well-defined evaluation standard, the scores struggle to reflect a model's actual capabilities.

🤔 **Developers gaming the benchmarks**: Meta tuned a version of its Maverick model to score well on Chatbot Arena, then shipped a worse-performing version, highlighting the risk that model developers manipulate benchmark results to chase rankings. Such practices can mislead the public and hinder the healthy development of AI.

💡 **Insufficient incentives for volunteers**: The article argues that AI labs should pay the people who evaluate their models, learning from rather than repeating the data labeling industry's exploitative practices. Today many crowdsourced evaluation platforms depend on unpaid volunteer contributions, which can degrade evaluation quality and fails to respect volunteers' labor.

📢 **Dynamic benchmarks and independent evaluation**: Experts recommend that benchmarks use dynamic data sets, be run by multiple independent entities, and be tailored to distinct use cases. This makes evaluation more objective and comprehensive and avoids the limitations of any single benchmark.

AI labs are increasingly relying on crowdsourced benchmarking platforms such as Chatbot Arena to probe the strengths and weaknesses of their latest models. But some experts say that there are serious problems with this approach from an ethical and academic perspective.

Over the past few years, labs including OpenAI, Google, and Meta have turned to platforms that recruit users to help evaluate upcoming models’ capabilities. When a model scores favorably, the lab behind it will often tout that score as evidence of a meaningful improvement.

It’s a flawed approach, however, according to Emily Bender, a University of Washington linguistics professor and co-author of the book “The AI Con.” Bender takes particular issue with Chatbot Arena, which tasks volunteers with prompting two anonymous models and selecting the response they prefer.

“To be valid, a benchmark needs to measure something specific, and it needs to have construct validity — that is, there has to be evidence that the construct of interest is well-defined and that the measurements actually relate to the construct,” Bender said. “Chatbot Arena hasn’t shown that voting for one output over another actually correlates with preferences, however they may be defined.”
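Bender's critique concerns what those head-to-head votes actually measure once they are rolled up into a ranking. As a rough illustration only, here is a minimal Python sketch of how pairwise "which response is better?" votes are commonly aggregated into a leaderboard using an Elo-style rating update; the constants, model names, and function names are assumptions for the example, not LMArena's actual implementation.

```python
# Hypothetical sketch: turning pairwise preference votes into a leaderboard
# with an Elo-style update. Real arena-style leaderboards differ in detail
# (e.g., Bradley-Terry fits, confidence intervals, deduplication of votes).

from collections import defaultdict

K = 32          # update step size (assumed)
BASE = 1000.0   # starting rating for every model (assumed)

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_ratings(votes, ratings=None):
    """votes: iterable of (model_a, model_b, winner) tuples; winner is 'a', 'b', or 'tie'."""
    ratings = ratings if ratings is not None else defaultdict(lambda: BASE)
    for model_a, model_b, winner in votes:
        score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
        exp_a = expected_score(ratings[model_a], ratings[model_b])
        ratings[model_a] += K * (score_a - exp_a)
        ratings[model_b] += K * ((1.0 - score_a) - (1.0 - exp_a))
    return ratings

if __name__ == "__main__":
    votes = [("model-x", "model-y", "a"), ("model-y", "model-x", "b"), ("model-x", "model-y", "tie")]
    for model, rating in sorted(update_ratings(votes).items(), key=lambda kv: -kv[1]):
        print(f"{model}: {rating:.1f}")
```

A score produced this way summarizes which outputs voters happened to prefer, and that is the quantity Bender argues has not been shown to track any well-defined construct.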

Asmelash Teka Hadgu, the co-founder of AI firm Lesan and a fellow at the Distributed AI Research Institute, said that he thinks benchmarks like Chatbot Arena are being “co-opted” by AI labs to “promote exaggerated claims.” Hadgu pointed to a recent controversy involving Meta’s Llama 4 Maverick model. Meta fine-tuned a version of Maverick to score well on Chatbot Arena, only to withhold that model in favor of releasing a worse-performing version.

“Benchmarks should be dynamic rather than static data sets,” Hadgu said, “distributed across multiple independent entities, such as organizations or universities, and tailored specifically to distinct use cases, like education, healthcare, and other fields done by practicing professionals who use these [models] for work.”

Hadgu and Kristine Gloria, who formerly led the Aspen Institute’s Emergent and Intelligent Technologies Initiative, also made the case that model evaluators should be compensated for their work. Gloria said that AI labs should learn from the mistakes of the data labeling industry, which is notorious for its exploitative practices. (Some labs have been accused of the same.)

“In general, the crowdsourced benchmarking process is valuable and reminds me of citizen science initiatives,” Gloria said. “Ideally, it helps bring in additional perspectives to provide some depth in both the evaluation and fine-tuning of data. But benchmarks should never be the only metric for evaluation. With the industry and the innovation moving quickly, benchmarks can rapidly become unreliable.”

Matt Frederikson, the CEO of Gray Swan AI, which runs crowdsourced red teaming campaigns for models, said that volunteers are drawn to Gray Swan’s platform for a range of reasons, including “learning and practicing new skills.” (Gray Swan also awards cash prizes for some tests.) Still, he acknowledged that public benchmarks “aren’t a substitute” for “paid private” evaluations.

“[D]evelopers also need to rely on internal benchmarks, algorithmic red teams, and contracted red teamers who can take a more open-ended approach or bring specific domain expertise,” Frederikson said. “It’s important for both model developers and benchmark creators, crowdsourced or otherwise, to communicate results clearly to those who follow, and be responsive when they are called into question.”

Alex Atallah, the CEO of model marketplace OpenRouter, which recently partnered with OpenAI to grant users early access to OpenAI’s GPT-4.1 models, said open testing and benchmarking of models alone “isn’t sufficient.” So did Wei-Lin Chiang, an AI doctoral student at UC Berkeley and one of the founders of LMArena, which maintains Chatbot Arena.

“We certainly support the use of other tests,” Chiang said. “Our goal is to create a trustworthy, open space that measures our community’s preferences about different AI models.”

Chiang said that incidents such as the Maverick benchmark discrepancy aren’t the result of a flaw in Chatbot Arena’s design, but rather labs misinterpreting its policy. LM Arena has taken steps to prevent future discrepancies from occurring, Chiang said, including updating its policies to “reinforce our commitment to fair, reproducible evaluations.”

“Our community isn’t here as volunteers or model testers,” Chiang said. “People use LM Arena because we give them an open, transparent place to engage with AI and give collective feedback. As long as the leaderboard faithfully reflects the community’s voice, we welcome it being shared.”

