MarkTechPost@AI · April 12
This AI Paper from Salesforce Introduces VLM2VEC and MMEB: A Contrastive Framework and Benchmark for Universal Multimodal Embeddings

VLM2VEC is a joint effort by researchers from Salesforce Research and the University of Waterloo. It provides a universal multimodal embedding framework together with a comprehensive benchmark named MMEB. Through contrastive training, the model fuses visual and textual data into a unified representation space, improving generalization across tasks and datasets. The study shows that VLM2VEC performs strongly on classification, visual question answering, retrieval, and visual grounding, with a particularly clear advantage in zero-shot tests, pointing toward scalable, adaptable multimodal AI.

🖼️ Multimodal embedding integrates visual and textual data into a single representation space. VLM2VEC aims to improve generalization across a wide range of tasks, addressing the difficulty existing models have in generalizing effectively across tasks and modalities.

📊 The research team built a comprehensive benchmark named MMEB, comprising 36 datasets across four major tasks: classification, visual question answering, retrieval, and visual grounding. Some datasets are used for training and the rest for evaluation, including out-of-distribution tasks, providing a unified standard for assessing models.

💡 The VLM2VEC framework converts vision-language models into embedding models through contrastive training, using backbones such as Phi-3.5-V and LLaVA-1.6 combined with task-specific instructions so it can handle arbitrary combinations of text and images; training uses the InfoNCE loss to align matching query-target pairs.

🚀 Experimental results show that VLM2VEC scores above 50% in every task category, outperforming existing baselines, and is especially strong in zero-shot tests, demonstrating robust generalization; the LoRA tuning strategy also proved effective.

Multimodal embeddings combine visual and textual data into a single representational space, enabling systems to understand and relate images and language meaningfully. These embeddings support various tasks, including visual question answering, retrieval, classification, and grounding. The technology is especially important for AI models that interpret real-world content through visual and linguistic lenses, such as document analysis, digital assistants, or visual search engines.

A pressing challenge has been the inability of current models to generalize across diverse tasks and modalities effectively. Most models are trained for highly specific tasks or underperform when applied to unfamiliar datasets. Furthermore, without a broad and unified benchmark, evaluating performance across multimodal tasks becomes inconsistent and fragmented. This limits the models’ capability to handle the variety of functions required in realistic, cross-domain applications, especially when new data distributions are introduced.

Several tools, such as CLIP, BLIP, and SigLIP, have been proposed for generating visual-textual embeddings. These models typically use separate encoders for images and text, merging their outputs through simple operations like score-level fusion. While these approaches offer baseline utility, they suffer from limited cross-modal reasoning and generalization ability. Their performance in zero-shot conditions tends to decline due to shallow fusion strategies and the lack of task-specific instruction handling during training.

In a collaboration between researchers from Salesforce Research and the University of Waterloo, a new model called VLM2VEC was introduced alongside a comprehensive benchmark named MMEB. MMEB comprises 36 datasets across four major tasks: classification, visual question answering, retrieval, and visual grounding. It divides datasets into 20 used for training and 16 for evaluation, including out-of-distribution tasks. The VLM2VEC framework is designed to convert any vision-language model into an embedding model using contrastive training, allowing it to handle any combination of text and images while following task instructions.
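To make the instruction-following input format concrete, the sketch below shows one way a query and its target might be bundled before being embedded by the backbone. The prompt wording and field names here are assumptions for illustration, not the paper's exact templates.

```python
# Hypothetical sketch of instruction-prefixed query/target construction
# (field names and prompt text are illustrative, not the paper's templates).
from typing import Optional


def build_input(instruction: str, text: Optional[str] = None,
                image_path: Optional[str] = None) -> dict:
    """Bundle an optional instruction, text, and image into one model input."""
    return {"instruction": instruction, "text": text, "image": image_path}


# Retrieval-style pair: the query carries the task instruction plus a caption,
# while the target is a candidate image with no instruction attached.
query = build_input(
    "Retrieve the image that matches the given caption.",
    text="A dog catching a frisbee on the beach.",
)
target = build_input("", image_path="candidate_001.jpg")
```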

To build VLM2VEC, the research team used backbone models such as Phi-3.5-V and LLaVA-1.6. The method begins by constructing task-specific instruction-based queries and targets, processed through a vision-language model to generate embeddings. Contrastive training is employed using the InfoNCE loss function with cosine similarity, aligning embeddings by maximizing the similarity between matching query-target pairs while minimizing it for mismatches. To support large batch sizes, critical for training with diverse negatives, the researchers used GradCache, which splits batches into memory-manageable sub-batches and accumulates gradients. This process ensures efficient training even with the high memory demands of multimodal inputs. Task-specific instructions are embedded within the training pipeline to help the model adapt its encoding to the nature of the task, such as grounding or retrieval, further boosting its generalization capabilities.
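For a concrete view of the training objective, here is a minimal PyTorch sketch of an InfoNCE loss over cosine similarities with in-batch negatives. It is a simplified stand-in for the paper's setup: the temperature value is arbitrary, and the GradCache sub-batching is not reproduced.

```python
import torch
import torch.nn.functional as F


def info_nce_loss(query_emb: torch.Tensor,
                  target_emb: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE with in-batch negatives: each query's matching target is the
    positive; every other target in the batch serves as a negative."""
    # Cosine similarity reduces to a dot product after L2 normalization.
    q = F.normalize(query_emb, dim=-1)
    t = F.normalize(target_emb, dim=-1)
    logits = (q @ t.T) / temperature                   # (B, B) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)


# Toy usage with random vectors standing in for VLM-produced embeddings.
q = torch.randn(8, 768, requires_grad=True)
t = torch.randn(8, 768, requires_grad=True)
info_nce_loss(q, t).backward()
```

The GradCache step described above lets the same loss be computed over a much larger effective batch: embeddings are produced in memory-manageable sub-batches and their gradients accumulated, rather than holding the full batch in memory at once.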

Performance results demonstrate the advantage of the proposed method. The best-performing version of VLM2VEC used LLaVA-1.6 as its backbone, applied LoRA tuning, and processed images at 1344 × 1344 resolution. This configuration achieved a Precision@1 score of 62.9% across all 36 MMEB datasets. In zero-shot tests on the 16 out-of-distribution datasets, it maintained a strong 57.1% score. Compared to the best-performing baseline model without fine-tuning, which scored 44.7%, VLM2VEC showed an 18.2-point improvement. Compared to the top fine-tuned baseline at 47.2%, the improvement was 15.7 points. Across all task categories—classification, VQA, retrieval, and grounding—the model consistently scored above 50%, a level of performance not matched by any baseline. The results also indicate that LoRA-tuned variants outperformed those trained with full fine-tuning, showing that parameter-efficient training strategies can deliver higher accuracy.
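The Precision@1 numbers above can be read as top-1 retrieval accuracy: the fraction of queries whose highest-ranked candidate is the correct target. Below is a minimal sketch that assumes, for simplicity, a single candidate pool shared by all queries; MMEB's actual per-task candidate sets may differ.

```python
import torch
import torch.nn.functional as F


def precision_at_1(query_emb: torch.Tensor,
                   candidate_emb: torch.Tensor,
                   gold_index: torch.Tensor) -> float:
    """Fraction of queries whose highest-cosine-similarity candidate is the gold target."""
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(candidate_emb, dim=-1)
    top1 = (q @ c.T).argmax(dim=-1)          # best-scoring candidate per query
    return (top1 == gold_index).float().mean().item()


# Toy check: 4 queries scored against 10 shared candidates.
score = precision_at_1(torch.randn(4, 768), torch.randn(10, 768),
                       gold_index=torch.tensor([0, 3, 5, 9]))
print(f"Precision@1 = {score:.2f}")
```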

The research clearly outlines a solution to the problem of task-specific multimodal embedding tools that lack generalization. By combining a well-structured training framework and a robust benchmark, the study demonstrates a universal embedding model that handles varied tasks effectively using contrastive training and instruction-following. This development marks a meaningful step forward in scalable, adaptable multimodal AI.


Check out the Paper and Project. All credit for this research goes to the researchers of this project.
