MarkTechPost@AI · April 10
T* and LV-Haystack: A Spatially-Guided Temporal Search Framework for Efficient Long-Form Video Understanding

This article introduces T*, a new framework for efficiently understanding long-form video content in computer vision. The researchers revisit temporal search and recast it as a spatial problem, using adaptive zoom-in techniques to locate keyframes. To evaluate the framework, they introduce the LV-HAYSTACK benchmark, which contains a large collection of real-world videos and annotations. Experiments show that T* significantly improves the performance of existing vision-language models while reducing computational cost, offering a new approach to long-form video understanding.

🔍 Long-form video understanding is a major challenge in computer vision; the key difficulty is efficiently finding the relevant frames among a massive number of frames. Existing vision-language models (VLMs) are computationally expensive when processing long videos.

💡 The study proposes the T* framework, which recasts temporal search as a spatial problem using adaptive zoom-in techniques. The framework consists of three stages: question grounding, iterative temporal search, and task completion.

📊 T* was evaluated on LV-HAYSTACK and several other datasets, showing significant performance gains, especially on long videos and under tight frame budgets. It improves the accuracy of models such as GPT-4o and LLaVA-OV while reducing computational cost.

📈 LV-HAYSTACK is a large benchmark with 480 hours of real-world video and over 15,000 annotated QA instances for evaluating long-form video understanding models. T* performs strongly on LV-HAYSTACK, demonstrating its effectiveness.

🚀 By focusing on keyframe selection and fine-grained frame retrieval, T* offers a more efficient way to understand long-form video content. Experiments show that it significantly outperforms uniform sampling and retrieval-based sampling across a range of evaluation benchmarks.

Understanding long-form videos—ranging from minutes to hours—presents a major challenge in computer vision, especially as video understanding tasks expand beyond short clips. One of the key difficulties lies in efficiently identifying the few frames, out of the thousands in a lengthy video, that are needed to answer a given query. Most vision-language models (VLMs), such as LLaVA and Tarsier, process hundreds of tokens per image, making frame-by-frame analysis of long videos computationally expensive. To address this, a new paradigm known as temporal search has gained prominence. Unlike traditional temporal localization, which typically identifies continuous segments within a video, temporal search aims to retrieve a sparse set of highly relevant frames dispersed across the entire timeline—akin to finding a “needle in a haystack.”
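
To put that cost in perspective, here is a quick back-of-the-envelope calculation; the one-hour duration, 1 fps sampling rate, and 600-tokens-per-frame figure are illustrative assumptions rather than numbers reported in the paper.

```python
# Rough cost of frame-by-frame analysis with a typical VLM.
# Assumptions (illustrative, not from the paper): a one-hour video sampled
# at 1 fps, and roughly 600 visual tokens per frame.
frames = 60 * 60 * 1            # 3,600 frames at 1 fps
tokens_per_frame = 600          # "hundreds of tokens per image"
total_tokens = frames * tokens_per_frame
print(f"{total_tokens:,} visual tokens")  # 2,160,000: far beyond typical context windows
```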

While advancements in attention mechanisms and video transformers have improved temporal modeling, these methods still face limitations in capturing long-range dependencies. Some approaches attempt to overcome this by compressing video data or selecting specific frames to reduce the input size. Although benchmarks for long-video understanding exist, they mostly evaluate performance based on downstream question-answering tasks rather than directly assessing the effectiveness of temporal search. In contrast, the emerging focus on keyframe selection and fine-grained frame retrieval—ranging from glance-based to caption-guided methods—offers a more targeted and efficient approach to understanding long-form video content.

Stanford, Northwestern, and Carnegie Mellon researchers revisited temporal search for long-form video understanding, introducing LV-HAYSTACK—a large benchmark with 480 hours of real-world videos and over 15,000 annotated QA instances. They frame the task as finding a few key frames from thousands, highlighting the limitations of current models. To address this, they propose T*, a framework that reimagines temporal search as a spatial search using adaptive zoom-in techniques across time and space. T* significantly boosts performance while reducing computational cost, improving the accuracy of models like GPT-4o and LLaVA-OV using far fewer frames.

The study introduces a Temporal Search (TS) task to enhance video understanding in long-context visual language models. The goal is to select a minimal set of keyframes from a video that retains all the information necessary to answer a given question. The proposed T* framework performs this in three stages: question grounding, iterative temporal search, and task completion. It identifies the relevant objects in the question, locates them across frames using a spatial search model, and updates a frame sampling strategy based on confidence scores. Evaluated on the LV-HAYSTACK benchmark, T* shows improved efficiency and accuracy at significantly lower computational cost.
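
As a rough illustration of that three-stage loop, the sketch below mirrors the description above; the helper callables (ground_question, detect_objects, answer_with_frames) are hypothetical placeholders supplied by the caller, not the authors' API, and the confidence-weighted resampling is a simplified stand-in for T*'s adaptive zoom-in.

```python
import numpy as np

def t_star_search(video_frames, question, ground_question, detect_objects,
                  answer_with_frames, budget=8, iterations=5):
    """Sketch of the pipeline: question grounding, iterative search, task completion."""
    # Stage 1: question grounding -- extract the objects/cues the question refers to.
    targets = ground_question(question)                 # e.g. ["person", "red cup"]

    n = len(video_frames)
    scores = np.full(n, 1.0 / n)                        # start from a uniform sampling prior

    for _ in range(iterations):
        # Stage 2: iterative temporal search -- sample frames from the current
        # distribution and score them with a spatial (object-level) search model.
        sampled = np.random.choice(n, size=min(budget, n), replace=False, p=scores)
        for idx in sampled:
            conf = detect_objects(video_frames[idx], targets)  # confidence in [0, 1]
            scores[idx] = max(scores[idx], conf)
        scores = scores / scores.sum()                  # concentrate sampling near hits

    # Stage 3: task completion -- hand the top-scoring keyframes to the downstream VLM.
    keyframes = sorted(np.argsort(scores)[-budget:])
    return answer_with_frames(question, [video_frames[i] for i in keyframes])
```

In the paper's framing, the confidence scores from the spatial search decide where the sampler zooms in next, so later iterations concentrate the limited frame budget on the portions of the timeline most likely to contain the answer.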

The study evaluates the proposed T* temporal search framework across multiple datasets and tasks, including LV-HAYSTACK, LongVideoBench, VideoMME, NExT-QA, EgoSchema, and Ego4D LongVideo QA. T* is integrated into both open-source and proprietary vision-language models and consistently improves performance, especially on long videos and in limited-frame scenarios. It uses attention, object detection, or trained models for efficient keyframe selection, achieving high accuracy at reduced computational cost. Experiments show that T* progressively aligns sampling with the relevant frames over successive iterations, approaches human-level performance as the frame budget grows, and significantly outperforms uniform and retrieval-based sampling methods across various evaluation benchmarks.
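
The comparison with uniform sampling comes down to how a fixed frame budget is spent; the harness below sketches that setup, with select_keyframes standing in for a T*-style search routine and vlm.answer for any GPT-4o / LLaVA-OV-style interface (both names are hypothetical).

```python
def answer_long_video(video_frames, question, vlm, select_keyframes, budget=8):
    """Same VLM, same frame budget -- only the frame-selection strategy differs."""
    # Baseline: uniform sampling spreads the budget evenly across the timeline.
    stride = max(1, len(video_frames) // budget)
    uniform = video_frames[::stride][:budget]

    # Temporal search spends the same budget on query-relevant frames instead.
    picked = select_keyframes(video_frames, question, k=budget)
    searched = [video_frames[i] for i in picked]

    return {
        "uniform": vlm.answer(question, uniform),
        "temporal_search": vlm.answer(question, searched),
    }
```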

In conclusion, the work tackles the challenge of understanding long-form videos by revisiting the temporal search methods used in state-of-the-art VLMs. The authors frame the task as the “Long Video Haystack” problem—identifying a few relevant frames from tens of thousands. They introduce LV-HAYSTACK, a benchmark with 480 hours of video and over 15,000 human-annotated instances, to support this. Findings show that existing methods perform poorly on it. To address this, they propose T*, a lightweight framework that transforms temporal search into a spatial problem using adaptive zooming techniques. T* significantly boosts the performance of leading VLMs under tight frame budgets, demonstrating its effectiveness.


Check out the Paper and Project Page. All credit for this research goes to the researchers of this project.


Tags: Long-form video understanding · Temporal search · T* framework · LV-HAYSTACK · Computer vision