---
license: other
license_name: nvlicense
license_link: LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- Qwen/Qwen2-7B-Instruct
- google/siglip-so400m-patch14-384
base_model_relation: merge
language:
- multilingual
tags:
- VideoITG
- Eagle
- VLM
---
# VideoITG-8B
[\[🌐Homepage\]](https://nvlabs.github.io/VideoITG/) [\[💻GitHub\]](https://github.com/NVlabs/VideoITG) [\[📜Tech Report\]](https://arxiv.org/abs/2507.13353)
[\[🤗VideoITG-40K\]](https://huggingface.co/datasets/NVEagle/VideoITG-40K)
## Introduction
VideoITG-8B is a multimodal video understanding model trained with Instructed Temporal Grounding (ITG). It enhances Video Large Language Models through intelligent frame selection, aligning frame sampling with the user instruction to handle the complexity of real-world video scenarios. Please refer to our [paper](https://arxiv.org/abs/2507.13353) for more details.
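As a rough illustration of how the released checkpoint might be loaded for inference, here is a minimal sketch with `transformers`. The repository id (`NVEagle/VideoITG-8B`) and `trust_remote_code` support are assumptions based on this card's metadata; see the GitHub repo for the official loading code.
```python
# Minimal loading sketch. Assumptions: the checkpoint lives at
# "NVEagle/VideoITG-8B" and ships custom modeling code (trust_remote_code);
# consult the official GitHub repo for the exact usage.
import torch
from transformers import AutoModel, AutoProcessor

model_id = "NVEagle/VideoITG-8B"  # assumed repository id
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval().cuda()  # requires a CUDA-capable GPU
```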
## Model Details
- **Model name**: VideoITG-8B
- **Architecture**: Customized Eagle-8B base model, fine-tuned with Instructed Temporal Grounding
- **Model type**: Multimodal Large Language Model with Video Understanding
- **Languages**: English (primary), with partial multilingual support
## Model Performance
| Model | Base Model | Frames | LongVideoBench | MLVU | VideoMME | CG-Bench |
|---------------------|-------------------|--------|----------------|------|----------|----------|
| VideoITG-7B | InternVL2.5-8B | 32 | 61.9 (+2.9%) | 75.0 (+7.8%) | 67.3 (+4.0%) | 46.7 (+7.0%) |
| VideoITG-7B | InternVL2.5-26B | 32 | 63.0 (+1.0%) | 78.9 (+6.1%) | 69.9 (+2.5%) | 48.7 (+6.0%) |
| VideoITG-7B | LLaVA-Video-7B | 32 | 61.6 (+3.6%) | 74.6 (+8.6%) | 66.1 (+3.0%) | 42.8 (+9.0%) |
| VideoITG-7B | LLaVA-Video-7B | 64 | 60.9 (+7.4%) | 76.3 (+7.6%) | 66.4 (+1.9%) | 42.9 (+8.1%) |

Numbers in parentheses denote the gain relative to the corresponding base model.
## Key Features
- **Instructed Temporal Grounding**: Intelligently selects video frames based on user instructions
- **Plug-and-Play**: Integrates seamlessly with existing video language models (see the sketch after this list)
- **Superior Temporal Understanding**: Excels in tasks requiring precise temporal grounding
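To make the plug-and-play idea above concrete, the sketch below shows the intended two-stage flow: VideoITG first scores candidate frames against the user instruction, then any downstream Video-LLM answers from only the selected frames. The helper methods `score_frames` and `answer` are hypothetical stand-ins, not the actual API.
```python
# Hypothetical two-stage flow: VideoITG picks instruction-relevant frames,
# then any Video-LLM answers using only those frames.
def videoitg_pipeline(itg_model, video_llm, frames, question, k=32):
    # Stage 1: instructed temporal grounding -- score every candidate frame
    # against the user instruction (score_frames is a hypothetical helper).
    scores = itg_model.score_frames(frames, question)

    # Keep the k highest-scoring frames, then restore temporal order.
    top_k = sorted(sorted(range(len(frames)), key=lambda i: scores[i])[-k:])
    selected = [frames[i] for i in top_k]

    # Stage 2: the downstream Video-LLM answers from the selected frames
    # (answer is likewise a hypothetical helper).
    return video_llm.answer(selected, question)
```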
## License
- Code: [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0)
- Model: [NVIDIA License](LICENSE) - Research preview for non-commercial use only
## Citation
If you find this project useful, please cite our work:
```bibtex
@article{wang2025videoitg,
  title   = {VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding},
  author  = {Shihao Wang and Guo Chen and De-An Huang and Zhiqi Li and Minghan Li and Guilin Liu and Jose M. Alvarez and Lei Zhang and Zhiding Yu},
  journal = {arXiv preprint arXiv:2507.13353},
  year    = {2025}
}
```
## Acknowledgement
- [Eagle](https://github.com/NVlabs/EAGLE): The codebase we built upon
- [LMMs-Eval](https://github.com/EvolvingLMMs-Lab/lmms-eval): Many thanks to the LMMs-Lab team for their easy-to-use evaluation tools
- [LLaVA-OneVision](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data) and [LLaVA-Video](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K): We train our models with data from these great open-source projects