<p align="center">
<h1 align="center"> ✏️ Data for VidChain Exercise</h1>
<h2 align="center">VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning</h2>
<p align="center">Ji Soo Lee*, Jongha Kim*, Jeehye Na, Jinyoung Park, Hyunwoo J. Kim†.
</p>
<h2 align="center">
AAAI 2025
</h2>
<h3 align="center">
<a href="https://arxiv.org/pdf/2501.06761" target='_blank'><img src="https://img.shields.io/badge/arXiv-2501.06761-b31b1b.svg"></a>
<a href="https://huggingface.co/datasets/simplecloud/VidChain-Data"><img src="https://img.shields.io/badge/huggingface-datasets-yellow"></a>
</h3>
<div align="center">
<img src="asset/main.png" width="750px" />
</div>
## 🎯 Learning Objectives
By working through this exercise, you will:
- Reproduce baseline behavior of a video-language model (**VTimeLLM**, CVPR 2024 Highlight).
- Observe the limitations of existing approaches in temporal reasoning and coherence.
- Implement and experiment with **VidChain's improvements** using M-DPO (see the sketch after this list).
- Run inference on videos to generate **dense temporal captions (Dense Video Captioning)**.
- Evaluate how preference alignment improves performance over baselines.
- Discuss potential strategies for ensembling the different reasoning paths produced by VidChain's Chain-of-Tasks (CoTasks).
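
To make the M-DPO objective concrete before you touch the code, here is a minimal sketch of the underlying Direct Preference Optimization loss. It assumes preference pairs have already been formed by ranking model responses with a dense video captioning metric, which is the pairing strategy the paper's "Metric-based DPO" name describes; `dpo_loss` and its argument names are illustrative, not the actual API of this repository.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective on a batch of preference pairs.

    Each argument is the summed log-probability of a full response
    under either the trainable policy or the frozen reference model.
    In a metric-based variant, the "chosen"/"rejected" labels come
    from ranking two sampled responses by a DVC metric rather than
    by human preference; that ranking step is assumed done upstream.
    """
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between the metric-preferred and
    # dispreferred responses relative to the reference model.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

In this setup, the higher-scoring response acts as the chosen sample and the lower-scoring one as rejected, so the policy is nudged toward metric-preferred captions without requiring human preference labels.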
<br>
## Citations 🌱
```bibtex
@inproceedings{lee2025vidchain,
title={VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning},
author={Lee, Ji Soo and Kim, Jongha and Na, Jeehye and Park, Jinyoung and Kim, Hyunwoo J},
booktitle={AAAI},
year={2025}
}
```