# 🎥 VidChain Exercise: Chain-of-Tasks with Metric-based Direct Preference Optimization

<p align="center">
  <img src="https://img.shields.io/badge/AAAI-2025-blue" alt="AAAI 2025">
  <a href="https://arxiv.org/pdf/2501.06761" target='_blank'><img src="https://img.shields.io/badge/arXiv-2501.06761-b31b1b.svg" alt="arXiv"></a>
  <a href="https://huggingface.co/datasets/simplecloud/VidChain-Data"><img src="https://img.shields.io/badge/huggingface-datasets-yellow" alt="Hugging Face Dataset"></a>
</p>

<div align="center">
  <img src="asset/main.png" width="750px" alt="VidChain Framework" />
</div>

## 📚 About This Repository

This repository contains the exercise materials and implementation for **VidChain**, a novel framework for dense video captioning with VideoLLMs. VidChain combines Chain-of-Tasks (CoTasks) and Metric-based Direct Preference Optimization (M-DPO) to improve temporal reasoning and coherence in video understanding.

### 🎯 Research Paper

**VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning**<br>
Ji Soo Lee\*, Jongha Kim\*, Jeehye Na, Jinyoung Park, Hyunwoo J. Kim†<br>
AAAI 2025

## 🚀 Learning Objectives

By working through this exercise, you will:

- ✅ **Reproduce baseline behavior** of a video-language model (**VTimeLLM**, CVPR 2024 Highlight)
- 🔍 **Observe limitations** of existing approaches in temporal reasoning and coherence
- 🛠️ **Implement and experiment** with VidChain's improvements using M-DPO
- 🎬 **Run inference** on videos to generate dense temporal captions (Dense Video Captioning)
- 📊 **Evaluate** how preference alignment improves performance over baselines
- 💡 **Discuss strategies** for ensembling different reasoning paths of VidChain's CoTasks
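
Several of these objectives revolve around dense video captioning output: a list of timestamped segments, each with a caption. As a warm-up, here is a minimal sketch of turning a generated answer into `(start, end, caption)` segments. The `from <start> to <end>, <caption>` format and the `parse_dense_captions` helper are illustrative assumptions, not the exercise's actual output template:

```python
import re

# Hypothetical answer format for illustration only; the real template
# depends on the model and prompts used in the exercise.
PATTERN = re.compile(
    r"from\s+(\d+)\s+to\s+(\d+),\s*(.+?)(?=(?:\s*from\s+\d+\s+to)|$)",
    re.IGNORECASE | re.DOTALL,
)

def parse_dense_captions(text: str) -> list[tuple[int, int, str]]:
    """Split a generated answer into (start, end, caption) segments."""
    return [(int(s), int(e), cap.strip().rstrip("."))
            for s, e, cap in PATTERN.findall(text)]

answer = "From 0 to 23, a man chops vegetables. From 24 to 61, he stirs them in a pan."
print(parse_dense_captions(answer))
# [(0, 23, 'a man chops vegetables'), (24, 61, 'he stirs them in a pan')]
```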

## 📁 Repository Structure

```
VidChain-exercise/
├── README_HF.md              # This file - Hugging Face README
├── READ.md                   # Original exercise README
├── upload.py                 # Upload script for Hugging Face Hub
├── upload_single_file.py     # Single-file upload utility
├── remove_file.py            # File removal utility
├── setup_hf_upload.py        # Setup script for HF upload
├── HF_UPLOAD_GUIDE.md        # Comprehensive upload guide
├── requirements.txt          # Python dependencies
├── app.py                    # Streamlit app for HF Spaces
├── asset/                    # Project assets and images
│   └── main.png              # Main framework diagram
└── VTimeLLM/                 # VideoLLM implementation
    └── ...                   # VTimeLLM source code
```

## 🔧 Quick Start

### 1. Install Dependencies
```bash
pip install -r requirements.txt
```

### 2. Setup for VideoLLaMA2
```bash
# Clone the main repository
git clone https://github.com/mlvlab/VidChain.git
cd VidChain

# Install VideoLLaMA2 dependencies
conda create -n videollama python=3.10 -y
conda activate videollama
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121

cd VideoLLaMA2
pip install -r requirements.txt
pip install num2words datasets pycocoevalcap rich
pip install flash-attn==2.5.7 --no-build-isolation
```

### 3. Download Pre-trained Models
- **VideoLLaMA2 checkpoints**: [Download from the official repo](https://github.com/DAMO-NLP-SG/VideoLLaMA2?tab=readme-ov-file#earth_americas-model-zoo)
- **VidChain checkpoints**: [Download from Hugging Face](https://huggingface.co/datasets/simplecloud/VidChain-Data)

## 🎯 Key Features

### Chain-of-Tasks (CoTasks)
- Decomposes dense video captioning into a chain of simpler subtasks
- Improves temporal reasoning capabilities
- Enhances coherence in video captioning
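
The learning objectives also ask you to discuss strategies for ensembling the different reasoning paths CoTasks produces. One simple strategy, sketched below purely as an illustration (the `temporal_iou` and `consensus` helpers are hypothetical, not part of the codebase), is to keep only the segments on which two paths agree under a temporal-IoU threshold:

```python
def temporal_iou(a, b):
    """IoU of two (start, end) temporal segments."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def consensus(path_a, path_b, thresh=0.5):
    """Keep segments from path_a that some segment in path_b matches (tIoU >= thresh)."""
    return [seg for seg in path_a
            if any(temporal_iou(seg, other) >= thresh for other in path_b)]

a = [(0, 10), (12, 30), (40, 55)]   # segments from reasoning path A
b = [(1, 11), (41, 56)]             # segments from reasoning path B
print(consensus(a, b))  # [(0, 10), (40, 55)]
```

Other strategies worth discussing include scoring each path's full output with a captioning metric and keeping the best path, or merging matched segments by averaging their boundaries.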

### Metric-based Direct Preference Optimization (M-DPO)
- Preference-alignment training guided by evaluation metrics
- Improves performance over traditional baselines
- Improves temporal consistency
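
M-DPO builds on Direct Preference Optimization (DPO), in which the policy is trained to prefer a "chosen" response over a "rejected" one relative to a frozen reference model. As a rough scalar sketch of the standard DPO loss (vanilla DPO, not the paper's exact metric-based variant, and operating on plain summed log-probabilities rather than model outputs):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Vanilla DPO loss for one preference pair, given summed log-probs
    under the policy (pi_*) and the frozen reference model (ref_*)."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# With zero margin the loss is log(2); as the policy widens its preference
# for the chosen response relative to the reference, the loss decreases.
print(dpo_loss(-5.0, -8.0, -5.0, -8.0))  # 0.693147... == log(2)
print(dpo_loss(-4.0, -9.0, -5.0, -8.0))  # smaller: the margin is positive
```

The "metric-based" part of M-DPO concerns how the chosen/rejected pairs are constructed, using dense-video-captioning evaluation metrics to rank candidate responses.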

### VideoLLaMA2 Integration
- Builds on a state-of-the-art video-language model
- Supports the ActivityNet and YouCook2 datasets
- Provides pre-extracted features for efficient training

## 📊 Dataset Support

This exercise supports two major datasets:

1. **ActivityNet** (301 GB of pre-extracted features)
2. **YouCook2** (32 GB of pre-extracted features)
⚠️ **Storage Warning**: The pre-extracted features require over 330 GB of disk space if you download both datasets; make sure you have enough free space before starting.
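
A quick way to sanity-check free space before downloading (standard library only; the 350 GB threshold is a rough allowance for both feature sets plus extraction overhead, not a figure from the exercise):

```python
import shutil

def has_room(path=".", needed_gb=350):
    """Return True if the filesystem holding `path` has at least `needed_gb` GB free."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= needed_gb

print(has_room(".", needed_gb=350))
```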

## 🔗 Related Resources

- **Paper**: [VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning](https://arxiv.org/pdf/2501.06761)
- **Main Repository**: [mlvlab/VidChain](https://github.com/mlvlab/VidChain)
- **Dataset**: [simplecloud/VidChain-Data](https://huggingface.co/datasets/simplecloud/VidChain-Data)
- **VideoLLaMA2**: [DAMO-NLP-SG/VideoLLaMA2](https://github.com/DAMO-NLP-SG/VideoLLaMA2)

## 📝 Citation

If you find this work useful, please cite:

```bibtex
@inproceedings{lee2025vidchain,
  title={VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning},
  author={Lee, Ji Soo and Kim, Jongha and Na, Jeehye and Park, Jinyoung and Kim, Hyunwoo J},
  booktitle={AAAI},
  year={2025}
}
```

## 🤝 Contributing

This is an exercise repository for educational purposes. For contributions to the main VidChain project, please visit the [main repository](https://github.com/mlvlab/VidChain).

## 📄 License

This project is released under the same license as the main VidChain repository. Please refer to the main repository for license details.

---

<div align="center">
  <p><em>Built with ❤️ for the AI research community</em></p>
  <p><em>Part of the VidChain research project at AAAI 2025</em></p>
</div>
|