<h1 align="center"> ✏️ Data for VidChain Exercise</h1>
<h2 align="center">VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning</h2>
<p align="center">Ji Soo Lee*, Jongha Kim*, Jeehye Na, Jinyoung Park, Hyunwoo J. Kim†.
</p>
<h2 align="center">
AAAI 2025
</h2>
<h3 align="center">
<a href="https://arxiv.org/pdf/2501.06761" target='_blank'><img src="https://img.shields.io/badge/arXiv-2501.06761-b31b1b.svg"></a>
<a href="https://huggingface.co/datasets/simplecloud/VidChain-Data"><img src="https://img.shields.io/badge/huggingface-datasets-yellow"></a>
</h3>
<div align="center">
<img src="asset/main.png" width="750px" />
</div>

## 🎯 Learning Objectives

By working through this exercise, you will:
- Reproduce the baseline behavior of a video-language model (**VTimeLLM**, CVPR 2024 Highlight).
- Observe the limitations of existing approaches in temporal reasoning and coherence.
- Implement and experiment with **VidChain's improvements** using M-DPO (a minimal sketch of the underlying objective follows below).
- Run inference on videos to generate **dense temporal captions (Dense Video Captioning)**.
- Evaluate how preference alignment improves performance over the baselines.
- Discuss potential strategies for ensembling the different reasoning paths of VidChain's CoTasks.
<br>
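Before diving in, it may help to see the preference objective that M-DPO builds on. The snippet below is a minimal PyTorch sketch, not the repository's implementation: the function names `dpo_preference_loss` and `pick_pair_by_metric`, the `beta` default, and the choice of metric are all illustrative assumptions, and VidChain's actual M-DPO adapts this objective to dense-video-captioning metrics as described in the paper.

```python
# Minimal, self-contained sketch of the DPO-style objective that M-DPO
# builds on, assuming summed sequence log-probabilities are precomputed.
import torch
import torch.nn.functional as F


def dpo_preference_loss(
    policy_chosen_logps: torch.Tensor,
    policy_rejected_logps: torch.Tensor,
    ref_chosen_logps: torch.Tensor,
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,  # hypothetical default; not taken from the paper
) -> torch.Tensor:
    """Direct Preference Optimization loss over caption pairs.

    In M-DPO, 'chosen' vs. 'rejected' is decided by scoring generations
    with a DVC evaluation metric rather than by human annotators --
    that is the 'metric-based' part.
    """
    # Implicit reward = beta * log-ratio between policy and frozen reference.
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the metric-preferred caption above the dispreferred one.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()


def pick_pair_by_metric(candidates, metric_fn, ground_truth):
    """Hypothetical helper: rank sampled captions with a DVC metric and
    return (chosen, rejected) as the best- and worst-scoring samples."""
    ranked = sorted(candidates, key=lambda c: metric_fn(c, ground_truth))
    return ranked[-1], ranked[0]
```

The key design choice relative to vanilla DPO is that preference pairs are constructed automatically, by scoring the model's own sampled outputs against ground truth with evaluation metrics, so no human preference labels are needed.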
## Citations 🌱

```bibtex
@inproceedings{lee2025vidchain,
  title={VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning},
  author={Lee, Ji Soo and Kim, Jongha and Na, Jeehye and Park, Jinyoung and Kim, Hyunwoo J},
  booktitle={AAAI},
  year={2025}
}
```