Add dataset card for VideoSSR-30K
#1
by nielsr - opened

README.md ADDED
---
task_categories:
- video-text-to-text
language:
- en
license: unknown
tags:
- reinforcement-learning
- self-supervised-learning
- video-understanding
- mllm
---

# VideoSSR-30K Dataset

This repository contains the **VideoSSR-30K dataset**, introduced in the paper [VideoSSR: Video Self-Supervised Reinforcement Learning](https://huggingface.co/papers/2511.06281).

**Project Page & Code:** [https://github.com/lcqysl/VideoSSR](https://github.com/lcqysl/VideoSSR)

## Paper Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has substantially advanced the video understanding capabilities of Multimodal Large Language Models (MLLMs). However, the rapid progress of MLLMs is outpacing the complexity of existing video datasets, while the manual annotation of new, high-quality data remains prohibitively expensive. This work investigates a pivotal question: can the rich, intrinsic information within videos be harnessed to self-generate high-quality, verifiable training data? To answer this, we introduce three self-supervised pretext tasks: Anomaly Grounding, Object Counting, and Temporal Jigsaw. We construct the Video Intrinsic Understanding Benchmark (VIUBench) to validate their difficulty, revealing that current state-of-the-art MLLMs struggle significantly on these tasks. Building upon these pretext tasks, we develop the VideoSSR-30K dataset and propose VideoSSR, a novel video self-supervised reinforcement learning framework for RLVR. Extensive experiments across 17 benchmarks, spanning four major video domains (General Video QA, Long Video QA, Temporal Grounding, and Complex Reasoning), demonstrate that VideoSSR consistently enhances model performance, yielding an average improvement of over 5%. These results establish VideoSSR as a potent foundational framework for developing more advanced video understanding in MLLMs.
## Pretext Tasks

VideoSSR-30K is built on three self-supervised pretext tasks designed to leverage intrinsic video information to generate high-quality, verifiable training data: Anomaly Grounding, Object Counting, and Temporal Jigsaw.

## VIUBench

To rigorously test the capabilities of modern MLLMs on fundamental video understanding, the **V**ideo **I**ntrinsic **U**nderstanding **Bench**mark (**VIUBench**) was introduced. This benchmark is systematically constructed from the three self-supervised pretext tasks (Anomaly Grounding, Object Counting, and Temporal Jigsaw). It specifically evaluates a model's ability to reason about intrinsic video properties, such as temporal coherence and fine-grained details, independent of external annotations. Our results show that VIUBench poses a significant challenge even for the most advanced models, highlighting a critical area for improvement and validating the effectiveness of our approach.

## Data Format

To facilitate standardized testing, the data for all evaluation tasks is organized in the following JSON format:

```json
{
  "video": "fFjv93ACGo8",
  "question": "...",
  "answer": "C"
}
```
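Given records in this format, scoring a multiple-choice prediction reduces to exact letter matching on the `answer` field. The snippet below is a minimal, hypothetical harness: the field names come from the format above, but the sample records and helper names are illustrative, not part of the released evaluation code.

```python
import json

# Two toy records following the card's format: a video ID, a question,
# and a verifiable multiple-choice answer (illustrative content).
records_json = """
[
  {"video": "fFjv93ACGo8", "question": "How many cars appear?", "answer": "C"},
  {"video": "abc123", "question": "Which clip comes first?", "answer": "B"}
]
"""

def load_records(text: str) -> list[dict]:
    """Parse records and check that the three expected keys are present."""
    records = json.loads(text)
    for r in records:
        assert {"video", "question", "answer"} <= r.keys(), f"missing keys in {r}"
    return records

def accuracy(records: list[dict], predictions: dict[str, str]) -> float:
    """Exact-match accuracy of predicted answer letters, keyed by video ID."""
    correct = sum(predictions.get(r["video"]) == r["answer"] for r in records)
    return correct / len(records)

recs = load_records(records_json)
print(accuracy(recs, {"fFjv93ACGo8": "C", "abc123": "A"}))  # 0.5
```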

## Sample Usage

### Training with VideoSSR-30K

First, download the VideoSSR-30K dataset or build your own training data. Then run the training script:

```bash
bash ./train/train.sh
```

### Evaluation

**Video QA**

```bash
python ./eval/vqa.py
```

**Temporal Grounding**

```bash
python ./eval/vtg.py
```
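As background for the temporal grounding setting, predicted segments are commonly scored by temporal IoU (tIoU) against the ground-truth segment, often reported as Recall@IoU at fixed thresholds. The following is a generic sketch of that metric, not the specific logic inside the repository's `vtg.py`:

```python
def temporal_iou(pred: tuple[float, float], gt: tuple[float, float]) -> float:
    """Intersection-over-union of two [start, end] segments in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_iou(pairs: list[tuple[tuple, tuple]], threshold: float = 0.5) -> float:
    """Fraction of (prediction, ground-truth) pairs with tIoU >= threshold."""
    hits = sum(temporal_iou(p, g) >= threshold for p, g in pairs)
    return hits / len(pairs)

# Toy example: one overlapping prediction, one disjoint prediction.
pairs = [((10.0, 20.0), (12.0, 22.0)), ((0.0, 5.0), (30.0, 35.0))]
print(round(temporal_iou(*pairs[0]), 3))  # 0.667
print(recall_at_iou(pairs, threshold=0.5))  # 0.5
```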

### Prepare Your Own Data

The repository provides the necessary scripts in the `./pretext_tasks` directory for creating your own datasets with our self-supervised methods.

The process involves two main stages:

1. **Frame Sampling**

First, prepare your source videos. Then use the `sample_frames.py` script to preprocess them and extract frames. This step prepares the visual data in the format required by the task-generation scripts.

```bash
# Example usage:
python ./pretext_tasks/sample_frames.py
```

2. **Generating Pretext Task Data**

Once your frames are sampled, use the following scripts to generate training data for each self-supervised pretext task:

* `grounding.py`: creates data for the Anomaly Grounding task.
* `counting.py`: creates data for the Object Counting task.
* `jigsaw.py`: creates data for the Temporal Jigsaw task.

```bash
python ./pretext_tasks/grounding.py
python ./pretext_tasks/counting.py
python ./pretext_tasks/jigsaw.py
```

## Citation

If you use the VideoSSR-30K dataset or find this work useful, please cite the original paper:

```bibtex
@article{he2025videossr,
  title={VideoSSR: Video Self-Supervised Reinforcement Learning},
  author={He, Zefeng and Qu, Xiaoye and Li, Yafu and Huang, Siyuan and Liu, Daizong and Cheng, Yu},
  journal={arXiv preprint arXiv:2511.06281},
  year={2025}
}
```