Add comprehensive dataset card for ViF-CoT-4K

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +127 -0
README.md ADDED
@@ -0,0 +1,127 @@
+ ---
+ task_categories:
+ - video-text-to-text
+ license: cc-by-4.0
+ language:
+ - en
+ tags:
+ - video-detection
+ - ai-generated-content
+ - explainable-ai
+ - multimodal
+ ---
+
+ # ViF-CoT-4K Dataset
+
+ This repository hosts the **ViF-CoT-4K** dataset, a key component of the [Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning](https://huggingface.co/papers/2512.15693) paper. Skyra is a specialized multimodal large language model (MLLM) designed to identify human-perceivable visual artifacts in AI-generated videos, leveraging them as grounded evidence for both detection and explanation.
+
+ - **Paper**: [Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning](https://huggingface.co/papers/2512.15693)
+ - **Project Page**: https://joeleelyf.github.io/Skyra/
+ - **Code**: https://github.com/JoeLeelyf/Skyra
+
+ ## Introduction
+
+ The misuse of AI-driven video generation technologies has raised serious social concerns, highlighting the urgent need for reliable AI-generated video detectors. Most existing methods are limited to binary classification and lack the explanations needed for human interpretation. **ViF-CoT-4K** addresses this gap by providing a specialized dataset for training multimodal large language models (MLLMs) to identify human-perceivable visual artifacts in AI-generated videos and leverage them as grounded evidence for both detection and explanation.
+
+ ViF-CoT-4K is the first large-scale AI-generated video artifact dataset with fine-grained human annotations, supporting the development of models that combine spatio-temporal artifact perception, explanation capability, and detection accuracy.
+
+ ### Hierarchical Artifact Taxonomy
+
+ The dataset defines a comprehensive taxonomy to categorize AI generation errors, dividing them into **Low-level Forgery** (e.g., texture/color anomalies) and **Violation of Laws** (e.g., physical inconsistencies).
31
+
32
+ <p align="center">
33
+ <img src="https://github.com/JoeLeelyf/Skyra/raw/main/static/images/taxonomy.png" alt="Taxonomy of Artifacts" width="60%">
34
+ </p>
35
+
36
+ ## Dataset: ViF-CoT-4K
37
+
38
+ **ViF-CoT-4K** is constructed to address the lack of detailed artifact annotations in existing datasets.
39
+
40
+ - **Scale**: ~4,000 videos, including high-quality samples from **Sora-2, Wan2.1, Kling**, and more.
41
+ - **Annotation**: Fine-grained labels including artifact type, textual explanation, timestamps, and bounding boxes.
42
+ - **Real-Fake Pairs**: Generated videos are semantically aligned with real counterparts to prevent shortcut learning.
43
+ <p align="center">
44
+ <img src="https://github.com/JoeLeelyf/Skyra/raw/main/static/images/statistics.png" alt="Dataset Statistics" width="90%">
45
+ </p>
46
+
+ ## Usage
+
+ ### Requirements
+ - **SFT Stage**: follow [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for environment setup.
+ - **RL Stage**: follow [verl](https://github.com/volcengine/verl) for environment setup.
+ - **Inference**: follow [Qwen-2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) for a quick start and [vLLM](https://github.com/vllm-project/vllm) for deployment; a minimal inference sketch follows this list.
+
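+ For orientation, here is a minimal inference sketch using the standard Qwen2.5-VL chat interface on one clip's parsed frames. The checkpoint path, frame directory, and prompt text are illustrative assumptions, not the exact prompts used by Skyra:
+
+ ```python
+ # Minimal sketch (assumed setup): run a Qwen2.5-VL-style checkpoint on parsed frames.
+ from pathlib import Path
+
+ from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
+ from qwen_vl_utils import process_vision_info
+
+ model_path = "Qwen/Qwen2.5-VL-7B-Instruct"  # or a local Skyra checkpoint
+ model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+     model_path, torch_dtype="auto", device_map="auto"
+ )
+ processor = AutoProcessor.from_pretrained(model_path)
+
+ # Frames parsed from one clip (directory layout assumed to match test_index.json).
+ frames = sorted(str(p) for p in Path("path_to_parsed_frames_dir/Real/gdymHI9S6gM-0").glob("*.jpg"))
+ messages = [{
+     "role": "user",
+     "content": [
+         {"type": "video", "video": frames},
+         # Illustrative prompt; see the repository for the prompts actually used.
+         {"type": "text", "text": "Is this video real or AI-generated? Point out any visual artifacts."},
+     ],
+ }]
+
+ text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ _, video_inputs = process_vision_info(messages)
+ inputs = processor(text=[text], videos=video_inputs, return_tensors="pt").to(model.device)
+ output_ids = model.generate(**inputs, max_new_tokens=512)
+ print(processor.batch_decode(output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0])
+ ```
+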
+ ### Data Preparation
+ - Training data: Download and prepare the **ViF-CoT-4K** dataset from [here](https://huggingface.co/datasets/JoeLeelyf/ViF-CoT-4K).
+
+ - Evaluation data: Download evaluation datasets (e.g., **ViF-Bench**) from [here](https://huggingface.co/datasets/JoeLeelyf/ViF-Bench), and modify the paths in `test_index.json` to point to your local directory.
+ The `test_index.json` file should follow this format:
+ ```json
+ {
+   "Real": [
+     "path_to_parsed_frames_dir/Real/gdymHI9S6gM-0",
+     ...
+   ],
+   "LTX-Video-13B-T": [
+     "path_to_parsed_frames_dir/Fake/LTX-Video-13B-T/gdymHI9S6gM-0",
+     ...
+   ],
+   ...
+ }
+ ```
+
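+ A minimal sketch for fetching the evaluation data and sanity-checking the index follows; the `local_dir` layout is an assumption, so adjust it to your setup:
+
+ ```python
+ # Minimal sketch: download ViF-Bench from the Hub and sanity-check test_index.json.
+ import json
+
+ from huggingface_hub import snapshot_download
+
+ # local_dir is an assumed layout; point it wherever you keep evaluation data.
+ snapshot_download(repo_id="JoeLeelyf/ViF-Bench", repo_type="dataset", local_dir="data/ViF-Bench")
+
+ with open("test_index.json") as f:
+     index = json.load(f)
+
+ # Keys are sources ("Real" or a generator name); values list parsed-frame directories.
+ for source, frame_dirs in index.items():
+     print(f"{source}: {len(frame_dirs)} clips, e.g. {frame_dirs[0]}")
+ ```
+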
+ ### Supervised Fine-Tuning (SFT)
+ We use LLaMA-Factory for SFT. You can start training after setting up the dataset config, following the instructions in the LLaMA-Factory repository.
+
+ ```bash
+ cd train/LLaMA-Factory
+ bash train.sh
+ ```
+
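+ For reference, a minimal sketch of registering the converted SFT data in LLaMA-Factory's `data/dataset_info.json`; the dataset name, file name, and column mapping below are assumptions, so follow the LLaMA-Factory docs for the authoritative schema:
+
+ ```python
+ # Minimal sketch: append a dataset entry to LLaMA-Factory's data/dataset_info.json.
+ import json
+
+ entry = {
+     "vif_cot_4k": {                      # hypothetical dataset name referenced by train.sh
+         "file_name": "vif_cot_4k.json",  # converted ViF-CoT-4K conversations (assumed)
+         "formatting": "sharegpt",
+         "columns": {"messages": "messages", "videos": "videos"},
+     }
+ }
+
+ path = "train/LLaMA-Factory/data/dataset_info.json"
+ with open(path) as f:
+     info = json.load(f)
+ info.update(entry)
+ with open(path, "w") as f:
+     json.dump(info, f, indent=2)
+ ```
+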
+ ### Reinforcement Learning (RL)
+ We use verl for RL training with GRPO; the adapted reward design is provided in `train/verl/verl/utils/reward_score/ladm.py`.
+
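+ To illustrate the reward interface, here is a minimal sketch in the shape of a verl custom reward function. The `<answer>` tag format and the verdict-only scoring are illustrative assumptions; the actual reward in `ladm.py` also scores the grounded artifact evidence:
+
+ ```python
+ import re
+
+ def compute_score(data_source, solution_str, ground_truth, extra_info=None):
+     """Toy verdict reward in verl's custom reward-function signature.
+
+     Assumes the rollout wraps its final real/fake verdict in <answer>...</answer>;
+     the real ladm.py reward additionally scores artifact types, timestamps, and boxes.
+     """
+     match = re.search(r"<answer>(.*?)</answer>", solution_str, re.DOTALL)
+     if match is None:
+         return 0.0  # unparseable output gets no reward
+     verdict = match.group(1).strip().lower()
+     return 1.0 if verdict == str(ground_truth).strip().lower() else 0.0
+ ```
+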
+ ### Evaluation
+
+ Evaluation scripts are provided in the `eval/` directory. You can run them as follows:
+
+ - **Inference**: run inference to get model predictions and explanations, saving the results to a JSON file.
+ ```bash
+ cd eval
+ bash scripts/Skyra/inference.sh
+ # or
+ python inference.py \
+     --index_json /path_to/test_index.json \
+     --model_path /path_to/Skyra-SFT \
+     --model_name Skyra-SFT \
+     --save_dir results/Skyra
+ ```
+
+ - **Evaluation**: score the model predictions against the ground truth and compute metrics.
+ ```bash
+ cd eval
+ bash scripts/Skyra/eval.sh
+ # or
+ python eval.py \
+     --json_file_path results/Skyra/Skyra-SFT_predictions.json
+ ```
+
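+ If you want to inspect results without the provided script, here is a minimal sketch of verdict accuracy over a predictions file. The record schema (a flat list with `pred` and `label` fields) is an assumption; `eval.py` is the reference implementation:
+
+ ```python
+ # Minimal sketch: compute overall verdict accuracy from a predictions JSON.
+ # The "pred"/"label" fields are assumed; see eval.py for the reference metrics.
+ import json
+
+ with open("results/Skyra/Skyra-SFT_predictions.json") as f:
+     records = json.load(f)
+
+ correct = sum(r["pred"].strip().lower() == r["label"].strip().lower() for r in records)
+ print(f"verdict accuracy: {correct / len(records):.3f} ({correct}/{len(records)})")
+ ```
+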
+ ## License
+
+ The **ViF-CoT-4K** dataset and **Skyra** model weights are released under the **CC BY 4.0** license. Users must also adhere to the terms of the source datasets (Kinetics-400, Panda-70M, HD-VILA-100M).
+
+ ## Citation
+
+ If you find Skyra or ViF-CoT-4K useful, please cite our paper:
+
+ ```bibtex
+ @misc{li2025skyraaigeneratedvideodetection,
+   title={Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning},
+   author={Yifei Li and Wenzhao Zheng and Yanran Zhang and Runze Sun and Yu Zheng and Lei Chen and Jie Zhou and Jiwen Lu},
+   year={2025},
+   eprint={2512.15693},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2512.15693},
+ }
+ ```