Add metadata, GitHub link and BibTeX citation

#1 opened by nielsr (HF Staff)

Files changed (1): README.md (+21, −2)
````diff
--- a/README.md
+++ b/README.md
@@ -1,15 +1,34 @@
 ---
 license: apache-2.0
+library_name: transformers
+pipeline_tag: video-text-to-text
 ---
 
 # VideoLoom: A Video Large Language Model for Joint Spatial-Temporal Understanding
 
 Jiapeng Shi, [Junke Wang](https://wdrink.github.io/), [Zuyao You](https://scholar.google.com/citations?hl=en&user=X8Kh8uoAAAAJ), [Bo He](https://boheumd.github.io/), [Zuxuan Wu<sup>&#9993;</sup>](https://zxwu.azurewebsites.net/)
 
-[\[📜 Paper\]](https://arxiv.org/abs/2601.07290) [\[📥 Model\]](https://huggingface.co/collections/JPShi/videoloom)
+[\[📜 Paper\]](https://arxiv.org/abs/2601.07290) [\[💻 Code\]](https://github.com/JPShi12/VideoLoom) [\[📥 Model\]](https://huggingface.co/collections/JPShi/videoloom)
 
 ## 🔎 Overview
 
 This paper presents **VideoLoom**, a unified Video Large Language Model (Video LLM) for joint spatial-temporal understanding. To facilitate the development of fine-grained spatial and temporal localization capabilities, we curate **LoomData-8.7k**, a human-centric video dataset with temporally grounded and spatially localized captions. With this, VideoLoom achieves state-of-the-art or highly competitive performance across a variety of spatial and temporal benchmarks (e.g., 63.1 J&F on ReVOS for referring video object segmentation, and 48.3 R1@0.7 on Charades-STA for temporal grounding). In addition, we introduce **LoomBench**, a novel benchmark consisting of temporal, spatial, and compositional video-question pairs, enabling a comprehensive evaluation of Video LLMs from diverse aspects. Collectively, these contributions offer a universal and effective suite for joint spatial-temporal video understanding, setting a new standard in multimodal intelligence.
 
-![Model](assets/model.jpg)
+![Model](assets/model.jpg)
+
+## 📜 Citation
+
+If you find our work helpful, please consider giving a star ⭐ and a citation 📝
+
+```bibtex
+@article{shi2026videoloom,
+  title={VideoLoom: A Video Large Language Model for Joint Spatial-Temporal Understanding},
+  author={Shi, Jiapeng and Wang, Junke and You, Zuyao and He, Bo and Wu, Zuxuan},
+  journal={arXiv preprint arXiv:2601.07290},
+  year={2026}
+}
+```
+
+## 🤝 Acknowledgements
+
+We build our codebase on [Sa2VA](https://github.com/bytedance/Sa2VA) and [TimeChat](https://github.com/RenShuhuai-Andy/TimeChat). Thanks for their wonderful projects.
````
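Since the PR adds `library_name: transformers` and `pipeline_tag: video-text-to-text`, the checkpoint should be loadable through the standard transformers auto classes. Below is a minimal loading sketch; the repo id `JPShi/VideoLoom` is an assumption inferred from the linked collection slug, and the exact processor and generation API may differ, so consult the model card and the GitHub repo.

```python
# Minimal loading sketch based on the metadata added in this PR
# (`library_name: transformers`). Names here are assumptions: the repo id
# is inferred from the linked collection, and the model may ship custom
# modeling code, hence `trust_remote_code=True`.
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "JPShi/VideoLoom"  # hypothetical checkpoint id; check the collection

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # requires `accelerate` for automatic device placement
    trust_remote_code=True,
)
```

The `video-text-to-text` tag mainly drives Hub search and widget behavior; how video inputs are actually preprocessed (frame sampling, resizing) is defined by the model's own processor.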