# D2VLM: Factorized Learning for Temporally Grounded Video-Language Models
This repository contains the pre-trained D2VLM models introduced in the paper *Factorized Learning for Temporally Grounded Video-Language Models*, accepted at ICCV 2025.
D2VLM is a framework that decouples the learning of temporal grounding and textual response in video-language models while emphasizing their inherent dependency. It introduces a "grounding then answering with evidence referencing" paradigm and uses a Factorized Preference Optimization (FPO) algorithm to improve event-level perception.
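The paradigm can be pictured with a short sketch. The method names below (`generate_grounding`, `generate_answer`) are hypothetical placeholders rather than the released D2VLM API; they only illustrate the decoupled grounding-then-answering flow with evidence referencing.

```python
# Minimal sketch of the "grounding then answering with evidence referencing"
# paradigm. The method names are hypothetical placeholders, NOT the released
# D2VLM API; they only illustrate the two decoupled stages.

def grounded_qa(model, video_frames, question):
    # Stage 1 (grounding): predict the event segments (start/end timestamps)
    # in the video that are relevant to the question.
    segments = model.generate_grounding(video_frames, question)
    # e.g., segments == [(12.4, 18.9), (31.0, 35.2)]

    # Stage 2 (answering): generate the textual response conditioned on the
    # grounded segments, referencing them as evidence in the answer.
    answer = model.generate_answer(video_frames, question, evidence=segments)
    return segments, answer
```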
## Performance
Performance on E.T. Bench is shown below.
| Model Name | Referring (Acc) | Grounding (F1) | Dense Captioning (F1) | Dense Captioning (Sim) | Complex (Recall) |
|---|---|---|---|---|---|
| D2VLM | 25.3 | 42.3 | 37.5 | 21.8 | 18.1 |
| D2VLM_mcqa_enhanced | 38.3 | 44.3 | 37.2 | 21.4 | 18.6 |
## Notes
For the Referring tasks of E.T. Bench (RAR/EVC/RVQ), we adopt a more stringent evaluation protocol than the original E.T. Bench, which typically yields lower metric values (e.g., some existing methods drop by more than 10% under our stricter metrics).
To enhance basic instruction-following capability, we incorporate automatically constructed multiple-choice questions into the proposed factorized preference optimization process. Because our factorized preference data synthesis already produces failure cases of different kinds, we can generate diverse distractor options from these failure causes and combine them with the original correct answer to form multiple-choice questions, without requiring any external data sources (see the sketch below). We refer to the resulting model as `D2VLM_mcqa_enhanced`.
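The following minimal sketch shows how such a multiple-choice question could be assembled, assuming each sample carries one correct answer plus distractors synthesized for different failure causes. The function name, fields, and option count are assumptions for illustration, not the released data pipeline.

```python
import random
import string

def build_mcq(question, correct_answer, distractors, num_options=4, seed=0):
    """Assemble one multiple-choice question from a correct answer and
    distractor options. Illustrative sketch only; not the released
    D2VLM pipeline.

    `distractors` stands in for answers synthesized under different
    failure causes (e.g., wrong temporal segment, wrong event
    description) during factorized preference data synthesis.
    """
    rng = random.Random(seed)
    # Mix the correct answer with (num_options - 1) sampled distractors,
    # then shuffle so the correct option's position varies.
    options = [correct_answer] + rng.sample(distractors, num_options - 1)
    rng.shuffle(options)
    letters = string.ascii_uppercase[:num_options]
    prompt = question + "\n" + "\n".join(
        f"({letter}) {option}" for letter, option in zip(letters, options)
    )
    answer_letter = letters[options.index(correct_answer)]
    return prompt, answer_letter
```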
## Citation
If you find our work useful in your research, please consider citing our paper:
    @inproceedings{d2vlm,
      title={Factorized Learning for Temporally Grounded Video-Language Models},
      author={Zeng, Wenzheng and Gao, Difei and Shou, Mike Zheng and Ng, Hwee Tou},
      booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
      year={2025},
      pages={20683--20693}
    }
## Acknowledgments
This project was built upon E.T. Bench, TimeChat, and AMP.