AI News · July 9, 22:19
Tencent improves testing creative AI models with new benchmark

Tencent has launched the ArtifactsBench benchmark to improve the evaluation of creative AI models, addressing the fact that existing tests pay too little attention to visual fidelity and interactive integrity. Through an automated, multimodal pipeline, ArtifactsBench evaluates LLMs on more than 1,800 tasks spanning data visualisations, web apps, and interactive mini-games. It uses a multimodal LLM (MLLM) as a judge to score AI-generated code across dimensions including functionality, user experience, and aesthetic quality. The results show that ArtifactsBench aligns closely with human evaluations, reveal that generalist models hold an advantage on creative tasks, and provide a yardstick for future progress in AI's ability to build user-friendly products.

💡ArtifactsBench is a new benchmark designed to improve the testing of creative AI models, focusing in particular on visual fidelity and interactive integrity to make up for the shortcomings of existing tests.

⚙️The benchmark's workflow: an AI receives a creative task and generates code; ArtifactsBench runs the code in a safe environment and captures screenshots; an MLLM acts as the judge and scores the result against metrics such as functionality, user experience, and aesthetic quality.

🏆Test results show that ArtifactsBench matches human evaluation with 94.4% consistency, far ahead of older automated benchmarks. The framework's judgments also agree with professional human developers more than 90% of the time.

🧐The research found that generalist models often outperform specialised models on creative tasks, because building a good visual application requires a blend of abilities, including reasoning, instruction following, and design aesthetics.

🚀Tencent hopes ArtifactsBench can reliably evaluate these qualities and thereby measure future progress in AI's ability to create products that are not only functional but that users genuinely want to use.

Tencent has introduced a new benchmark, ArtifactsBench, that aims to fix current problems with testing creative AI models.

Ever asked an AI to build something like a simple webpage or a chart and received something that works but has a poor user experience? The buttons might be in the wrong place, the colours might clash, or the animations feel clunky. It’s a common problem, and it highlights a huge challenge in the world of AI development: how do you teach a machine to have good taste?

For a long time, we’ve been testing AI models on their ability to write code that is functionally correct. These tests could confirm the code would run, but they were completely “blind to the visual fidelity and interactive integrity that define modern user experiences.”

This is the exact problem ArtifactsBench has been designed to solve. It’s less of a test and more of an automated art critic for AI-generated code.

Getting it right, like a human would

So, how does Tencent’s AI benchmark work? First, an AI is given a creative task from a catalogue of over 1,800 challenges, from building data visualisations and web apps to making interactive mini-games.
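
To make that setup concrete, here is a hypothetical sketch in Python of what one task record might look like; the field names, category labels, and checklist items are illustrative assumptions, not the benchmark's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ArtifactTask:
    """Hypothetical shape of one ArtifactsBench-style task record (illustrative only)."""
    task_id: str
    category: str          # e.g. "data visualisation", "web app", "mini-game"
    prompt: str            # the creative instruction handed to the model
    checklist: list[str] = field(default_factory=list)  # per-task criteria for the judge

example = ArtifactTask(
    task_id="viz-0001",
    category="data visualisation",
    prompt="Build an interactive bar chart of monthly sales with hover tooltips.",
    checklist=[
        "Chart renders without errors",
        "Tooltips appear on hover",
        "Colour palette is legible and consistent",
    ],
)
print(example.category, "-", example.prompt)
```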

Once the AI generates the code, ArtifactsBench gets to work. It automatically builds and runs the code in a safe and sandboxed environment.

To see how the application behaves, it captures a series of screenshots over time. This allows it to check for things like animations, state changes after a button click, and other dynamic user feedback.
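
The article doesn't name the tooling behind this step, so the sketch below assumes a headless browser driven by Playwright and a self-contained artifact.html produced by the model; both are stand-ins for whatever ArtifactsBench actually uses.

```python
# Minimal sketch of the "build, run, and watch" step, using Playwright as a
# stand-in sandbox (requires `pip install playwright` and `playwright install chromium`).
# The artifact.html filename, step count, and interval are assumptions.
from pathlib import Path
from playwright.sync_api import sync_playwright

def capture_timeline(artifact: str, out_dir: str = "shots",
                     steps: int = 4, interval_ms: int = 1000) -> None:
    """Open the generated page headlessly and grab screenshots at fixed intervals."""
    Path(out_dir).mkdir(exist_ok=True)
    with sync_playwright() as p:
        browser = p.chromium.launch()               # headless by default
        page = browser.new_page()
        page.goto(Path(artifact).resolve().as_uri())
        for i in range(steps):
            page.wait_for_timeout(interval_ms)      # let animations or state changes play out
            page.screenshot(path=f"{out_dir}/frame_{i}.png")
        browser.close()

capture_timeline("artifact.html")
```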

Finally, it hands over all this evidence – the original request, the AI’s code, and the screenshots – to a multimodal LLM (MLLM) to act as a judge.

This MLLM judge doesn’t just give a vague opinion; instead it uses a detailed, per-task checklist to score the result across ten different metrics. Scoring includes functionality, user experience, and even aesthetic quality, which keeps the process fair, consistent, and thorough.
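
As a rough illustration of that judging step, the sketch below packs the request, the generated code, and the screenshots into one prompt for an OpenAI-compatible multimodal endpoint. The judge model, prompt wording, and JSON score format are assumptions; the article only names functionality, user experience, and aesthetic quality among the ten metrics.

```python
import base64
import json
from openai import OpenAI  # any OpenAI-compatible multimodal endpoint, used as a stand-in judge

client = OpenAI()

def judge(request: str, code: str, screenshots: list[str], checklist: list[str]) -> dict:
    """Ask a multimodal model to score one artifact against its per-task checklist."""
    images = [
        {"type": "image_url",
         "image_url": {"url": "data:image/png;base64,"
                              + base64.b64encode(open(path, "rb").read()).decode()}}
        for path in screenshots
    ]
    prompt = (
        "You are judging an AI-generated web artifact.\n"
        f"Original request:\n{request}\n\nGenerated code:\n{code}\n\n"
        "Score each checklist item from 0 to 10 and reply as JSON: "
        '{"scores": {"<item>": <0-10>, ...}, "overall": <0-10>}\n'
        "Checklist: " + "; ".join(checklist)
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder: the article does not name the judge model
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": [{"type": "text", "text": prompt}, *images]}],
    )
    return json.loads(resp.choices[0].message.content)
```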

The big question is, does this automated judge actually have good taste? The results suggest it does.

When the rankings from ArtifactsBench were compared to WebDev Arena, the gold-standard platform where real humans vote on the best AI creations, they matched up with a 94.4% consistency. This is a massive leap from older automated benchmarks, which only managed around 69.4% consistency.
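
The article doesn't say how that consistency figure is computed; one common way to compare two rankings is pairwise agreement, the fraction of model pairs that both rankings order the same way, shown below with made-up ranks purely for illustration.

```python
from itertools import combinations

def pairwise_agreement(rank_a: dict, rank_b: dict) -> float:
    """Fraction of model pairs that two rankings order the same way."""
    agree = total = 0
    for m1, m2 in combinations(rank_a, 2):
        total += 1
        if (rank_a[m1] - rank_a[m2]) * (rank_b[m1] - rank_b[m2]) > 0:
            agree += 1
    return agree / total

# Made-up example ranks (1 = best), not real benchmark data.
artifactsbench = {"model_a": 1, "model_b": 2, "model_c": 3, "model_d": 4}
webdev_arena   = {"model_a": 1, "model_b": 3, "model_c": 2, "model_d": 4}
print(f"{pairwise_agreement(artifactsbench, webdev_arena):.1%}")  # 83.3% for these ranks
```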

On top of this, the framework’s judgments showed over 90% agreement with professional human developers.

Tencent evaluates the creativity of top AI models with its new benchmark

When Tencent put more than 30 of the world’s top AI models through their paces, the leaderboard was revealing. While top commercial models from Google (Gemini-2.5-Pro) and Anthropic (Claude 4.0-Sonnet) took the lead, the tests unearthed a fascinating insight.

You might think that an AI specialised in writing code would be the best at these tasks. But the opposite was true. The research found that “the holistic capabilities of generalist models often surpass those of specialized ones.”

A general-purpose model, Qwen-2.5-Instruct, actually beat its more specialised siblings, Qwen-2.5-coder (a code-specific model) and Qwen2.5-VL (a vision-specialised model).

The researchers believe this is because creating a great visual application isn’t just about coding or visual understanding in isolation; it requires a blend of skills.

The researchers highlight “robust reasoning, nuanced instruction following, and an implicit sense of design aesthetics” as examples of the vital skills involved. These are the kinds of well-rounded, almost human-like abilities that the best generalist models are beginning to develop.

Tencent hopes its ArtifactsBench benchmark can reliably evaluate these qualities and thus measure future progress in AI’s ability to create things that are not just functional but that users actually want to use.

See also: Tencent Hunyuan3D-PolyGen: A model for ‘art-grade’ 3D assets

