MarkTechPost@AI · April 21, 15:15
ByteDance Releases UI-TARS-1.5: An Open-Source Multimodal AI Agent Built upon a Powerful Vision-Language Model

ByteDance has introduced UI-TARS-1.5, a multimodal agent framework focused on GUI interaction and game environments. The model performs strongly across a variety of benchmarks, offers several architectural advantages, and is open-sourced with multiple deployment options.

💻 UI-TARS-1.5 is a multimodal agent framework focused on GUI interaction and game environments

🎯 It outperforms leading models across a variety of benchmarks

🌟 It introduces several architectural and training enhancements, such as integrated perception and reasoning

🔓 It is open source, with multiple deployment options and accompanying tooling

ByteDance has released UI-TARS-1.5, an updated version of its multimodal agent framework focused on graphical user interface (GUI) interaction and game environments. Designed as a vision-language model capable of perceiving screen content and performing interactive tasks, UI-TARS-1.5 delivers consistent improvements across a range of GUI automation and game reasoning benchmarks. Notably, it surpasses several leading models—including OpenAI’s Operator and Anthropic’s Claude 3.7—in both accuracy and task completion across multiple environments.

The release continues ByteDance’s research direction of building native agent models, aiming to unify perception, cognition, and action through an integrated architecture that supports direct engagement with GUI and visual content.

A Native Agent Approach to GUI Interaction

Unlike tool-augmented LLMs or function-calling architectures, UI-TARS-1.5 is trained end-to-end to perceive visual input (screenshots) and generate native human-like control actions, such as mouse movement and keyboard input. This positions the model closer to how human users interact with digital systems.
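
To make that perceive-act cycle concrete, here is a minimal sketch of such an agent loop in Python. The `model.predict`, `env.screenshot`, and `env.execute` interfaces are hypothetical stand-ins rather than UI-TARS-1.5's actual API; the point is only the screenshot-in, native-action-out structure.

```python
# Minimal sketch of a native agent loop, assuming a hypothetical
# model.predict(screenshot, instruction, history) interface; UI-TARS-1.5's
# real API may differ. The agent sees only pixels and emits only
# mouse/keyboard primitives.
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "scroll", or "finish"
    x: int = 0         # screen coordinates for pointer actions
    y: int = 0
    text: str = ""     # keystrokes for "type" actions

@dataclass
class AgentState:
    instruction: str
    history: list = field(default_factory=list)  # prior (screenshot, action) pairs

def run_episode(model, env, instruction, max_steps=50):
    """Perceive a screenshot, emit one native action, repeat until done."""
    state = AgentState(instruction)
    for _ in range(max_steps):
        shot = env.screenshot()    # raw pixels only; no DOM or accessibility tree
        action = model.predict(shot, state.instruction, state.history)
        if action.kind == "finish":
            return True
        env.execute(action)        # replay as mouse/keyboard input
        state.history.append((shot, action))
    return False
```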

UI-TARS-1.5 builds on its predecessor with several architectural and training enhancements, including tighter integration of perception and reasoning. Together, these improvements enable long-horizon interaction, error recovery, and compositional task planning, capabilities that are important for realistic UI navigation and control.
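
As an illustration of what error recovery and compositional planning can look like inside such a loop, the sketch below decomposes a goal into subtasks, verifies each from a fresh screenshot, and replans on failure. The `plan`, `verify`, and `reflect` methods are hypothetical and do not describe UI-TARS-1.5's actual training or inference procedure; `run_episode` refers to the loop sketched earlier.

```python
# Illustrative only, not UI-TARS-1.5's actual algorithm: decompose the goal
# into subtasks, verify each from a fresh screenshot, and replan on failure.
# plan(), verify(), and reflect() are hypothetical model capabilities.
def run_with_recovery(model, env, goal, max_retries=3):
    subtasks = model.plan(goal)          # goal -> ordered list of subtasks
    for sub in subtasks:
        for _ in range(max_retries):
            finished = run_episode(model, env, sub)
            if finished and model.verify(env.screenshot(), sub):
                break                    # subtask confirmed; move on
            sub = model.reflect(sub, env.screenshot())  # revise and retry
        else:
            return False                 # repeated failures: abort
    return True
```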

Benchmarking and Evaluation

The model has been evaluated on several benchmark suites covering agent behavior in both GUI and game-based tasks. These benchmarks provide a standardized way to measure performance across reasoning, grounding, and long-horizon execution.

GUI Agent Tasks

Visual Grounding and Screen Understanding

UI-TARS-1.5 shows consistent improvements in screen understanding and action grounding, both critical capabilities for real-world GUI agents.
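
Grounding here means mapping a language reference such as "the Save button" to a location on screen. A common convention, assumed here for illustration rather than taken from UI-TARS-1.5's documented output format, is for the model to emit normalized coordinates that the harness scales to the live resolution before clicking.

```python
# Sketch of action grounding, assuming the model emits normalized [0, 1]
# coordinates for the referenced element (a common convention; the exact
# output format of UI-TARS-1.5 may differ).
def ground_to_pixels(norm_x: float, norm_y: float, width: int, height: int):
    """Map normalized model coordinates onto the actual screen."""
    return round(norm_x * width), round(norm_y * height)

# e.g. the model grounds "click the Save button" to (0.91, 0.04)
x, y = ground_to_pixels(0.91, 0.04, 1920, 1080)   # -> (1747, 43)
```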

Game Environments

Accessibility and Tooling

UI-TARS-1.5 is open-sourced under the Apache 2.0 license and is available through several deployment options.

In addition to the model, the project offers detailed documentation, replay data, and evaluation tools to facilitate experimentation and reproducibility.
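
As a hedged starting point, the checkpoint can likely be pulled with Hugging Face transformers. The repository ID below is an assumption, so verify it against the project page before use.

```python
# Hypothetical loading sketch: the repository ID is an assumption, and the
# correct auto class depends on how the checkpoint is published.
# Requires `transformers` and `accelerate`.
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "ByteDance-Seed/UI-TARS-1.5-7B"   # assumed ID; check the project page
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")
```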

Conclusion

UI-TARS-1.5 represents a technically sound step forward for multimodal AI agents, particularly those focused on GUI control and grounded visual reasoning. Through a combination of vision-language integration, memory mechanisms, and structured action planning, the model demonstrates strong performance across a diverse set of interactive environments.

Rather than pursuing universal generality, the model is tuned for task-oriented multimodal reasoning—targeting the real-world challenge of interacting with software through visual understanding. Its open-source release provides a practical framework for researchers and developers interested in exploring native agent interfaces or automating interactive systems through language and vision.


