Upload README.md with huggingface_hub
README.md CHANGED
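
For reference, a minimal `huggingface_hub` sketch of the kind of upload this commit performs; the `repo_id` and `repo_type` below are assumptions based on the dataset links in the README itself:

```python
from huggingface_hub import HfApi

# Upload a local README.md to the Hub, as in this commit.
# repo_id/repo_type are assumed from the links in the README below.
api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="simplecloud/VidChain-Data",   # assumed target repo
    repo_type="dataset",
    commit_message="Upload README.md with huggingface_hub",
)
```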

@@ -1,120 +1,37 @@

# 🎥 VidChain Exercise: Chain-of-Tasks with Metric-based Direct Preference Optimization

<p align="center">
  <!-- badge/image link block (garbled in the page extraction) -->
</p>

**VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning**

Ji Soo Lee\*, Jongha Kim\*, Jeehye Na, Jinyoung Park, Hyunwoo J. Kim†

AAAI 2025

## 🚀 Learning Objectives

By working through this exercise, you will:

- 🔍 **Observe limitations** of existing approaches in temporal reasoning and coherence
- 🛠️ **Implement and experiment** with VidChain's improvements using M-DPO
- 🎬 **Run inference** on videos to generate dense temporal captions (Dense Video Captioning)
- 📊 **Evaluate** how preference alignment improves performance over baselines
- 💡 **Discuss strategies** for ensembling different reasoning paths of VidChain's CoTasks

## 📁 Repository Structure

```
VidChain-exercise/
├── README_HF.md          # This file - Hugging Face README
├── READ.md               # Original exercise README
├── upload.py             # Upload script for Hugging Face Hub
├── upload_single_file.py # Single file upload utility
├── remove_file.py        # File removal utility
├── setup_hf_upload.py    # Setup script for HF upload
├── HF_UPLOAD_GUIDE.md    # Comprehensive upload guide
├── requirements.txt      # Python dependencies
├── app.py                # Streamlit app for HF Spaces
├── asset/                # Project assets and images
│   └── main.png          # Main framework diagram
└── VTimeLLM/             # VideoLLM implementation
    └── ...               # VTimeLLM source code
```

## 🔧 Quick Start

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```

### 2. Setup for VideoLLaMA2

```bash
# Clone the main repository
git clone https://github.com/mlvlab/VidChain.git
cd VidChain

# Install VideoLLaMA2 dependencies
conda create -n videollama python=3.10 -y
conda activate videollama
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121

cd VideoLLaMA2
pip install -r requirements.txt
pip install num2words datasets pycocoevalcap rich
pip install flash-attn==2.5.7 --no-build-isolation
```

### 3. Download Pre-trained Models

- **VideoLLaMA2 checkpoints**: [Download from the official repo](https://github.com/DAMO-NLP-SG/VideoLLaMA2?tab=readme-ov-file#earth_americas-model-zoo)
- **VidChain checkpoints**: [Download from Hugging Face](https://huggingface.co/datasets/simplecloud/VidChain-Data) — a scripted alternative is sketched right after this list
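
If you prefer to script the checkpoint download, here is a minimal `huggingface_hub` sketch. It pulls the whole `simplecloud/VidChain-Data` repo, since the individual checkpoint paths inside it are not listed here:

```python
from huggingface_hub import snapshot_download

# Fetch the VidChain checkpoints/data repo referenced above.
# Note: this repo also hosts large pre-extracted features
# (see the storage warning under "Dataset Support" below).
local_path = snapshot_download(
    repo_id="simplecloud/VidChain-Data",
    repo_type="dataset",
)
print("Repo downloaded to:", local_path)
```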

## 🎯 Key Features

### Chain-of-Tasks (CoTasks)

- Novel approach to video understanding through task decomposition (see the sketch after this list)
- Improves temporal reasoning capabilities
- Enhances coherence in video captioning
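
As a rough illustration of the decomposition idea only — the sub-task order, the prompts, and the `model.ask` helper below are made up for this sketch, not taken from the paper:

```python
# Hypothetical Chain-of-Tasks flow for dense video captioning: solve
# easier sub-tasks in sequence, each conditioned on the previous answer,
# instead of asking for all events and captions in one shot.
def cotasks_dense_captioning(model, video):
    # Sub-task 1: estimate how many events the video contains.
    n_events = model.ask(video, "How many distinct events occur in this video?")
    # Sub-task 2: localize each event, conditioned on the count.
    segments = model.ask(video, f"Give start and end timestamps for the {n_events} events.")
    # Sub-task 3: caption each localized segment.
    captions = [model.ask(video, f"Describe the event from {start} to {end}.")
                for (start, end) in segments]
    return list(zip(segments, captions))
```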

### Metric-based Direct Preference Optimization (M-DPO)

- Advanced training methodology for preference alignment (a toy loss sketch follows this list)
- Better performance over traditional baselines
- Improved temporal consistency
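
To make the idea concrete, here is a toy DPO-style loss over metric-ranked pairs. The pairing rule (preferred = higher captioning metric) and the precomputed sequence log-probabilities are assumptions for illustration, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def mdpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Toy DPO loss where the 'winner' y_w scored higher than the
    'loser' y_l under an evaluation metric (e.g., a captioning score).

    All arguments are summed sequence log-probs: logp_* from the policy
    being trained, ref_logp_* from a frozen reference model.
    """
    chosen = beta * (logp_w - ref_logp_w)    # implicit reward of y_w
    rejected = beta * (logp_l - ref_logp_l)  # implicit reward of y_l
    return -F.logsigmoid(chosen - rejected).mean()

# Example with dummy log-probs:
loss = mdpo_loss(torch.tensor([-10.0]), torch.tensor([-12.0]),
                 torch.tensor([-11.0]), torch.tensor([-11.5]))
```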

### VideoLLaMA2 Integration

- State-of-the-art video-language model
- Support for ActivityNet and YouCook2 datasets
- Pre-extracted features for efficient training

## 📊 Dataset Support

This exercise supports two major datasets:

1. **ActivityNet** (301 GB of pre-extracted features)
2. **YouCook2** (32 GB of pre-extracted features)

⚠️ **Storage Warning**: The pre-extracted features require significant storage space. Please ensure you have adequate disk space before downloading; a selective-download sketch follows below.
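
One way to start small is to filter what gets downloaded. The `allow_patterns` argument is part of the actual `huggingface_hub` API, but the path patterns below are guesses about the repo layout — list the repo's files first and adjust:

```python
from huggingface_hub import list_repo_files, snapshot_download

# Inspect the layout before committing hundreds of GB of disk.
files = list_repo_files("simplecloud/VidChain-Data", repo_type="dataset")
print(files[:20])

# Pull only the (smaller) YouCook2 portion -- assumed paths; adjust to
# whatever the listing above actually shows.
snapshot_download(
    repo_id="simplecloud/VidChain-Data",
    repo_type="dataset",
    local_dir="VidChain-Data",
    allow_patterns=["*youcook2*", "*.json"],
)
```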

## 🔗 Related Resources

- **Paper**: [VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning](https://arxiv.org/pdf/2501.06761)
- **Main Repository**: [mlvlab/VidChain](https://github.com/mlvlab/VidChain)
- **Dataset**: [simplecloud/VidChain-Data](https://huggingface.co/datasets/simplecloud/VidChain-Data)
- **VideoLLaMA2**: [DAMO-NLP-SG/VideoLLaMA2](https://github.com/DAMO-NLP-SG/VideoLLaMA2)

## 📝 Citation

If you find this work useful, please cite:

```bibtex
@inproceedings{lee2025vidchain,
  title={VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning},
  author={Lee, Ji Soo and Kim, Jongha and Na, Jeehye and Park, Jinyoung and Kim, Hyunwoo J},
@@ -122,18 +39,3 @@ If you find this work useful, please cite:
  year={2025}
}
```

## 🤝 Contributing

This is an exercise repository for educational purposes. For contributions to the main VidChain project, please visit the [main repository](https://github.com/mlvlab/VidChain).

## 📄 License

This project is released under the same license as the main VidChain repository. Please refer to the main repository for license details.

---

<div align="center">
  <p><em>Built with ❤️ for the AI research community</em></p>
  <p><em>Part of the VidChain research project at AAAI 2025</em></p>
</div>

<p align="center">
<h1 align="center">✏️ Data for VidChain Exercise</h1>
<h2 align="center">VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning</h2>

<p align="center">Ji Soo Lee*, Jongha Kim*, Jeehye Na, Jinyoung Park, Hyunwoo J. Kim†</p>

<h2 align="center">AAAI 2025</h2>

<h3 align="center">
<a href="https://arxiv.org/pdf/2501.06761" target='_blank'><img src="https://img.shields.io/badge/arXiv-2501.06761-b31b1b.svg"></a>
<a href="https://huggingface.co/datasets/simplecloud/VidChain-Data"><img src="https://img.shields.io/badge/huggingface-datasets-yellow"></a>
</h3>

<div align="center">
<img src="asset/main.png" width="750px" />
</div>

## 🎯 Learning Objectives

By working through this exercise, you will:

- Reproduce baseline behavior of a video-language model (**VTimeLLM**, CVPR 2024 Highlight).
- Observe the limitations of existing approaches in temporal reasoning and coherence.
- Implement and experiment with **VidChain's improvements** using M-DPO.
- Run inference on videos to generate **dense temporal captions (Dense Video Captioning)**.
- Evaluate how preference alignment improves performance over baselines.
- Discuss potential strategies for ensembling different reasoning paths of VidChain's CoTasks.

<br>

## Citations 🌱

```bibtex
@inproceedings{lee2025vidchain,
  title={VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning},
  author={Lee, Ji Soo and Kim, Jongha and Na, Jeehye and Park, Jinyoung and Kim, Hyunwoo J},
  year={2025}
}
```