# 🎥 VidChain Exercise: Chain-of-Tasks with Metric-based Direct Preference Optimization
## 📚 About This Repository

This repository contains the exercise materials and implementation for VidChain, a framework for Dense Video Captioning with VideoLLMs. VidChain combines Chain-of-Tasks (CoTasks) and Metric-based Direct Preference Optimization (M-DPO) to improve the temporal reasoning and coherence of video-language models.
## 🎯 Research Paper

**VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning**
Ji Soo Lee, Jongha Kim*, Jeehye Na, Jinyoung Park, Hyunwoo J. Kim†
AAAI 2025
## 🚀 Learning Objectives
By working through this exercise, you will:
- ✅ Reproduce baseline behavior of a video-language model (VTimeLLM, CVPR 2024 Highlight)
- 🔍 Observe limitations of existing approaches in temporal reasoning and coherence
- 🛠️ Implement and experiment with VidChain's improvements using M-DPO
- 🎬 Run inference on videos to generate dense temporal captions (Dense Video Captioning)
- 📊 Evaluate how preference alignment improves performance over baselines
- 💡 Discuss strategies for ensembling different reasoning paths of VidChain's CoTasks
## 📁 Repository Structure

```
VidChain-exercise/
├── README_HF.md            # This file - Hugging Face README
├── READ.md                 # Original exercise README
├── upload.py               # Upload script for Hugging Face Hub
├── upload_single_file.py   # Single-file upload utility
├── remove_file.py          # File removal utility
├── setup_hf_upload.py      # Setup script for HF upload
├── HF_UPLOAD_GUIDE.md      # Comprehensive upload guide
├── requirements.txt        # Python dependencies
├── app.py                  # Streamlit app for HF Spaces
├── asset/                  # Project assets and images
│   └── main.png            # Main framework diagram
└── VTimeLLM/               # VideoLLM implementation
    └── ...                 # VTimeLLM source code
```
## 🔧 Quick Start

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```
### 2. Set Up VideoLLaMA2

```bash
# Clone the main repository
git clone https://github.com/mlvlab/VidChain.git
cd VidChain

# Install VideoLLaMA2 dependencies
conda create -n videollama python=3.10 -y
conda activate videollama
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121

cd VideoLLaMA2
pip install -r requirements.txt
pip install num2words datasets pycocoevalcap rich
pip install flash-attn==2.5.7 --no-build-isolation
```
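After installation, a quick sanity check helps confirm the pinned packages import correctly. This is a minimal sketch; the expected versions are simply the ones pinned above:

```python
# Verify the core dependencies installed by the steps above.
import torch

print("PyTorch:", torch.__version__)              # expect 2.1.0
print("CUDA available:", torch.cuda.is_available())

try:
    import flash_attn
    print("flash-attn:", flash_attn.__version__)  # expect 2.5.7
except ImportError:
    print("flash-attn missing; re-run the last pip install step above")
```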
### 3. Download Pre-trained Models

- **VideoLLaMA2 checkpoints**: download from the official [DAMO-NLP-SG/VideoLLaMA2](https://github.com/DAMO-NLP-SG/VideoLLaMA2) repository
- **VidChain checkpoints**: download from Hugging Face (a scripted option is sketched below; see also Related Resources)
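For the VidChain checkpoints, a minimal sketch using the `huggingface_hub` library follows. Note that the repo id below is hypothetical and used purely for illustration; substitute the actual checkpoint repo id:

```python
from huggingface_hub import snapshot_download

# NOTE: "mlvlab/VidChain-checkpoints" is a HYPOTHETICAL repo id;
# replace it with the real checkpoint repository before running.
snapshot_download(
    repo_id="mlvlab/VidChain-checkpoints",
    local_dir="checkpoints/vidchain",
)
```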
## 🎯 Key Features

### Chain-of-Tasks (CoTasks)

- Decomposes dense video captioning into a chain of simpler sub-tasks (see the sketch below)
- Improves temporal reasoning capabilities
- Enhances coherence in video captioning
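To make the decomposition concrete, here is an illustrative sketch of the CoTasks idea. The sub-task split and prompts are assumptions for illustration only, not the exact prompts used by VidChain, and `model.generate` is a hypothetical interface:

```python
def chain_of_tasks(model, video):
    """Illustrative CoTasks-style decomposition of dense video captioning.

    `model.generate` is a HYPOTHETICAL interface assumed to return parsed
    output; the sub-task split and prompts are for illustration only.
    """
    # Sub-task 1: temporal localization -- find the event segments first.
    segments = model.generate(
        video, prompt="List the start and end timestamps of every event."
    )
    # Sub-task 2: captioning -- describe each segment, conditioned on step 1.
    captions = [
        model.generate(video, prompt=f"Describe the event from {start}s to {end}s.")
        for (start, end) in segments
    ]
    return list(zip(segments, captions))
```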
### Metric-based Direct Preference Optimization (M-DPO)

- Training methodology that aligns the model with evaluation metrics through preference optimization (sketched below)
- Improves performance over baselines trained without preference alignment
- Improves temporal consistency
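The sketch below shows the general shape of such a loss: the standard DPO objective, with the preferred/dispreferred pair determined by dense-captioning metric scores. Weighting the loss by the metric gap is an assumption made here for illustration, not necessarily the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def m_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
               metric_w, metric_l, beta=0.1):
    """Metric-based DPO loss (illustrative sketch).

    logp_w / logp_l         : policy log-probs of the metric-preferred /
                              metric-dispreferred responses
    ref_logp_w / ref_logp_l : same log-probs under the frozen reference model
    metric_w / metric_l     : DVC metric scores used to rank the pair
    """
    # Standard DPO: implicit reward is the policy-vs-reference log-ratio.
    logits = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # ASSUMPTION: scale the loss by the metric gap, so pairs the metric
    # separates strongly contribute more to the update.
    weight = (metric_w - metric_l).clamp(min=0.0)
    return -(weight * F.logsigmoid(logits)).mean()
```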
### VideoLLaMA2 Integration

- State-of-the-art video-language model
- Support for the ActivityNet and YouCook2 datasets
- Pre-extracted features for efficient training (loading sketched below)
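For reference, loading a single pre-extracted feature tensor might look like the following. The file path and tensor layout are assumptions; consult the data release for the actual format:

```python
import torch

# HYPOTHETICAL path and layout -- check the VidChain data release.
features = torch.load("features/activitynet/v_example.pt")
print(features.shape)  # e.g., (num_frames, hidden_dim)
```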
## 📊 Dataset Support

This exercise supports two major datasets:

- **ActivityNet** (301 GB of pre-extracted features)
- **YouCook2** (32 GB of pre-extracted features)

⚠️ **Storage Warning**: The pre-extracted features require significant disk space (over 330 GB combined). Please ensure you have adequate capacity; a selective-download option is sketched below.
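Given the sizes above, it may be preferable to fetch only one dataset's features. Here is a minimal sketch using `huggingface_hub`; the `youcook2/*` directory pattern is an assumption about the repository layout, so check the actual file listing first:

```python
from huggingface_hub import snapshot_download

# Download only the (smaller) YouCook2 features from the VidChain data repo.
# NOTE: the "youcook2/*" pattern is an ASSUMED layout; verify the repo's
# actual file structure before running.
snapshot_download(
    repo_id="simplecloud/VidChain-Data",
    repo_type="dataset",
    allow_patterns=["youcook2/*"],
    local_dir="data/vidchain",
)
```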
## 🔗 Related Resources

- **Paper**: VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning
- **Main repository**: [mlvlab/VidChain](https://github.com/mlvlab/VidChain)
- **Dataset**: simplecloud/VidChain-Data
- **VideoLLaMA2**: [DAMO-NLP-SG/VideoLLaMA2](https://github.com/DAMO-NLP-SG/VideoLLaMA2)
## 📝 Citation

If you find this work useful, please cite:

```bibtex
@inproceedings{lee2025vidchain,
  title={VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning},
  author={Lee, Ji Soo and Kim, Jongha and Na, Jeehye and Park, Jinyoung and Kim, Hyunwoo J.},
  booktitle={AAAI},
  year={2025}
}
```
## 🤝 Contributing

This is an exercise repository for educational purposes. For contributions to the main VidChain project, please visit [mlvlab/VidChain](https://github.com/mlvlab/VidChain).
## 📄 License

This project is released under the same license as the main VidChain repository. Please refer to [mlvlab/VidChain](https://github.com/mlvlab/VidChain) for license details.
*Built with ❤️ for the AI research community*

*Part of the VidChain research project at AAAI 2025*