Zhiding committed
Commit bcc35bf · verified · 1 Parent(s): 1af5963

Update README.md

Files changed (1)
  1. README.md +72 -5
README.md CHANGED
@@ -1,5 +1,72 @@
- ---
- license: other
- license_name: nsclv1
- license_link: LICENSE
- ---
+ ---
+ license: other
+ license_name: nsclv1
+ license_link: LICENSE
+ pipeline_tag: image-text-to-text
+ library_name: transformers
+ base_model:
+ - Qwen/Qwen2-7B-Instruct
+ - google/siglip-so400m-patch14-384
+ base_model_relation: merge
+ language:
+ - multilingual
+ tags:
+ - VideoITG
+ - Eagle
+ - VLM
+ ---
+
+ # VideoITG-8B
+ [\[🌐Homepage\]](https://nvlabs.github.io/VideoITG/) [\[💻GitHub\]](https://github.com/NVlabs/VideoITG) [\[📜Tech Report\]](https://arxiv.org/abs/2507.13353)
+ [\[🤗VideoITG-40K\]](https://huggingface.co/datasets/NVEagle/VideoITG-40K)
+
+ ## Introduction
+
+ VideoITG-8B is a multimodal video understanding model trained for instructed temporal grounding: it enhances Video Large Language Models through intelligent, instruction-aware frame selection. By aligning frame sampling with the user's instruction, it tackles the complexities of real-world video scenarios. Please see our paper for more details.
+
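+ For convenience, below is a minimal loading sketch with 🤗 Transformers. It is an assumption, not documented usage: the repo id `NVEagle/VideoITG-8B` and a `trust_remote_code` entry point are guesses, so please refer to the GitHub repo for the supported path.
+
+ ```python
+ # Minimal sketch, assuming this checkpoint ships custom modeling code that
+ # loads through AutoModel/AutoProcessor with trust_remote_code=True.
+ # The repo id is an assumption; see https://github.com/NVlabs/VideoITG.
+ from transformers import AutoModel, AutoProcessor
+
+ repo_id = "NVEagle/VideoITG-8B"  # assumed repo id
+ processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
+ model = AutoModel.from_pretrained(repo_id, trust_remote_code=True).eval()
+ ```
+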
+ ## Model Details
+
+ - **Model name**: VideoITG-8B
+ - **Architecture**: Customized Eagle-8B base model, fine-tuned with Instructed Temporal Grounding
+ - **Model type**: Multimodal Large Language Model with Video Understanding
+ - **Languages**: English (primary), with partial multilingual support
+
+ ## Model Performance
+
+ | Model | Base Model | Frames | LongVideoBench | MLVU | VideoMME | CG-Bench |
+ |-------------|-----------------|--------|----------------|------|----------|----------|
+ | VideoITG-7B | InternVL2.5-8B | 32 | 61.9 (+2.9%) | 75.0 (+7.8%) | 67.3 (+4.0%) | 46.7 (+7.0%) |
+ | VideoITG-7B | InternVL2.5-26B | 32 | 63.0 (+1.0%) | 78.9 (+6.1%) | 69.9 (+2.5%) | 48.7 (+6.0%) |
+ | VideoITG-7B | LLaVA-Video-7B | 32 | 61.6 (+3.6%) | 74.6 (+8.6%) | 66.1 (+3.0%) | 42.8 (+9.0%) |
+ | VideoITG-7B | LLaVA-Video-7B | 64 | 60.9 (+7.4%) | 76.3 (+7.6%) | 66.4 (+1.9%) | 42.9 (+8.1%) |
+
+ Gains in parentheses are relative to the corresponding base model without VideoITG frame selection.
+
+ ## Key Features
+
+ - **Instructed Temporal Grounding**: Intelligently selects video frames based on user instructions
+ - **Plug-and-Play**: Seamlessly integrates with existing video language models (see the sketch after this list)
+ - **Superior Temporal Understanding**: Excels in tasks requiring precise temporal grounding
+
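+ To make the plug-and-play recipe concrete, here is a hedged sketch of the wiring; `itg_score` and `video_llm` are hypothetical stand-ins (this card does not document an API), the first returning VideoITG's per-frame relevance scores for an instruction and the second being any downstream Video-LLM such as LLaVA-Video.
+
+ ```python
+ # Hedged sketch: instructed frame selection feeding a downstream Video-LLM.
+ # `itg_score` and `video_llm` are hypothetical stand-ins, not a real API.
+ from typing import Callable, List, Sequence
+
+ def answer_with_videoitg(
+     frames: Sequence,                                   # decoded candidate frames
+     instruction: str,                                   # the user's question
+     itg_score: Callable[[Sequence, str], List[float]],  # per-frame relevance
+     video_llm: Callable[[Sequence, str], str],          # off-the-shelf Video-LLM
+     k: int = 32,                                        # the Video-LLM's frame budget
+ ) -> str:
+     """Keep the k frames most relevant to the instruction, then answer."""
+     scores = itg_score(frames, instruction)
+     top = sorted(range(len(frames)), key=scores.__getitem__, reverse=True)[:k]
+     top.sort()  # restore temporal order before handing frames downstream
+     return video_llm([frames[i] for i in top], instruction)
+ ```
+
+ Because selection happens before the downstream model ever sees the video, the Video-LLM itself needs no modification, which is how the same selector pairs with both InternVL2.5 and LLaVA-Video in the table above.
+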
+ ## License
+
+ - Code: [Apache 2.0 License](LICENSE)
+ - Model: [NVIDIA License](LICENSE_Model) - Research preview for non-commercial use only
+
+ ## Citation
+
+ If you find this project useful, please cite our work:
+
+ ```bibtex
+ @article{wang2025videoitg,
+   title   = {VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding},
+   author  = {Shihao Wang and Guo Chen and De-An Huang and Zhiqi Li and Minghan Li and Guilin Liu and Jose M. Alvarez and Lei Zhang and Zhiding Yu},
+   journal = {arXiv preprint arXiv:2507.13353},
+   year    = {2025}
+ }
+ ```
+
+ ## Acknowledgement
+
+ - [Eagle](https://github.com/NVlabs/EAGLE): The codebase we built upon
+ - [LMMs-Eval](https://github.com/EvolvingLMMs-Lab/lmms-eval): Many thanks to LMMs-Lab for the easy-to-use evaluation tools
+ - [LLaVA-OneVision](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data) and [LLaVA-Video](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K): We train our models with data from these great open-source projects (a dataset-loading sketch follows below)
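+
+ The datasets linked on this card, including [VideoITG-40K](https://huggingface.co/datasets/NVEagle/VideoITG-40K), are hosted on the Hugging Face Hub, so a hedged loading sketch looks like this (the available configs and splits are not documented here and may differ):
+
+ ```python
+ # Hedged sketch: pull the companion dataset with 🤗 Datasets.
+ # Config/split names are assumptions; check the dataset card first.
+ from datasets import load_dataset
+
+ ds = load_dataset("NVEagle/VideoITG-40K")
+ print(ds)  # inspect the available splits and fields
+ ```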