---
license: apache-2.0
library_name: transformers
pipeline_tag: video-text-to-text
---

# VideoLoom: A Video Large Language Model for Joint Spatial-Temporal Understanding

Jiapeng Shi, [Junke Wang](https://wdrink.github.io/), [Zuyao You](https://scholar.google.com/citations?hl=en&user=X8Kh8uoAAAAJ), [Bo He](https://boheumd.github.io/), [Zuxuan Wu](https://zxwu.azurewebsites.net/)

[\[📜 Paper\]](https://arxiv.org/abs/2601.07290) [\[💻 Code\]](https://github.com/JPShi12/VideoLoom) [\[📥 Model\]](https://huggingface.co/collections/JPShi/videoloom)

## 🔎 Overview

We present **VideoLoom**, a unified Video Large Language Model (Video LLM) for joint spatial-temporal understanding. To develop fine-grained spatial and temporal localization capabilities, we curate **LoomData-8.7k**, a human-centric video dataset with temporally grounded and spatially localized captions. Trained on this data, VideoLoom achieves state-of-the-art or highly competitive performance across a variety of spatial and temporal benchmarks (e.g., 63.1 J&F on ReVOS for referring video object segmentation, and 48.3 R1@0.7 on Charades-STA for temporal grounding).

In addition, we introduce **LoomBench**, a benchmark consisting of temporal, spatial, and compositional video-question pairs, enabling comprehensive evaluation of Video LLMs from diverse aspects.

Together, these contributions offer a universal and effective suite for joint spatial-temporal video understanding.

![Model](assets/model.jpg)

## 📜 Citation

If you find our work helpful, please consider giving us a star ⭐ and a citation 📝

```bibtex
@article{shi2026videoloom,
  title={VideoLoom: A Video Large Language Model for Joint Spatial-Temporal Understanding},
  author={Shi, Jiapeng and Wang, Junke and You, Zuyao and He, Bo and Wu, Zuxuan},
  journal={arXiv preprint arXiv:2601.07290},
  year={2026}
}
```

## 🤝 Acknowledgements

Our codebase is built upon [Sa2VA](https://github.com/bytedance/Sa2VA) and [TimeChat](https://github.com/RenShuhuai-Andy/TimeChat). Thanks for these wonderful projects.
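
## 🚀 Quick Start (illustrative)

Since the card declares `library_name: transformers`, the sketch below shows how a checkpoint from this collection might be loaded. The repo id `JPShi/VideoLoom`, the `trust_remote_code` requirement, and the dtype choice are assumptions for illustration, not part of the official release; please consult the [code repository](https://github.com/JPShi12/VideoLoom) for the supported loading and inference pipeline.

```python
# A minimal, untested sketch of loading a VideoLoom checkpoint with 🤗 Transformers.
# The repo id below is hypothetical; replace it with an actual checkpoint from the
# VideoLoom collection on the Hugging Face Hub.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "JPShi/VideoLoom"  # hypothetical checkpoint name

# Custom multimodal models on the Hub typically ship their own modeling code,
# which requires trust_remote_code=True to download and execute.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; check the release config
    trust_remote_code=True,
).eval().cuda()

# Video preprocessing and the inference entry point (e.g., frame sampling,
# segmentation and temporal-grounding outputs) are model-specific; see the
# official repository for the exact API.
```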