---
license: apache-2.0
---

# VideoLoom: A Video Large Language Model for Joint Spatial-Temporal Understanding

Jiapeng Shi, [Junke Wang](https://wdrink.github.io/), [Zuyao You](https://scholar.google.com/citations?hl=en&user=X8Kh8uoAAAAJ), [Bo He](https://boheumd.github.io/), [Zuxuan Wu<sup>✉</sup>](https://zxwu.azurewebsites.net/)

[\[Paper\]](https://arxiv.org/abs/2601.07290) [\[Model\]](https://huggingface.co/collections/JPShi/videoloom)

## Overview
This paper presents **VideoLoom**, a unified Video Large Language Model (Video LLM) for joint spatial-temporal understanding. To develop fine-grained spatial and temporal localization capabilities, we curate **LoomData-8.7k**, a human-centric video dataset with temporally grounded and spatially localized captions. Trained on this data, VideoLoom achieves state-of-the-art or highly competitive performance across a variety of spatial and temporal benchmarks (e.g., 63.1 J&F on ReVOS for referring video object segmentation, and 48.3 R1@0.7 on Charades-STA for temporal grounding). In addition, we introduce **LoomBench**, a new benchmark consisting of temporal, spatial, and compositional video-question pairs, enabling a comprehensive evaluation of Video LLMs from diverse aspects. Together, these contributions offer a universal and effective suite for joint spatial-temporal video understanding.
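Below is a minimal loading sketch for trying a released checkpoint from the linked collection. The repository id (`JPShi/VideoLoom-7B`), the use of `AutoProcessor`/`AutoModelForCausalLM` with `trust_remote_code`, and the prompt handling are assumptions for illustration, not documented interfaces; consult the model collection for the actual checkpoint names and inference code.

```python
# Hypothetical usage sketch -- repo id and processing pipeline are assumptions,
# not the documented VideoLoom API.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

MODEL_ID = "JPShi/VideoLoom-7B"  # placeholder; see the Hugging Face collection

# Load the processor and model; trust_remote_code pulls in any custom
# video/segmentation heads shipped with the checkpoint.
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
model.eval()
```

Benchmark-specific inference (e.g., decoding segmentation masks for ReVOS or timestamps for Charades-STA) will depend on the model's own preprocessing and output heads, so the checkpoint's bundled code should be treated as the authoritative entry point.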