---
task_categories:
- video-text-to-text
license: cc-by-4.0
language:
- en
tags:
- video-detection
- ai-generated-content
- explainable-ai
- multimodal
---

# ViF-CoT-4K Dataset

This repository hosts the **ViF-CoT-4K** dataset, introduced in the paper [Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning](https://huggingface.co/papers/2512.15693). Skyra is a specialized multimodal large language model (MLLM) designed to identify human-perceivable visual artifacts in AI-generated videos, leveraging them as grounded evidence for both detection and explanation.

-   **Paper**: [Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning](https://huggingface.co/papers/2512.15693)
-   **Project Page**: https://joeleelyf.github.io/Skyra/
-   **Code**: https://github.com/JoeLeelyf/Skyra

## Introduction

The misuse of AI-driven video generation technologies has raised serious social concerns, highlighting the urgent need for reliable AI-generated video detectors. Most existing methods are limited to binary classification and lack the necessary explanations for human interpretation. **ViF-CoT-4K** addresses this by providing a specialized dataset to train multimodal large language models (MLLMs) to identify human-perceivable visual artifacts in AI-generated videos and leverage them as grounded evidence for both detection and explanation.

ViF-CoT-4K is the first large-scale AI-generated video artifact dataset with fine-grained human annotations, supporting the development of models that combine spatio-temporal artifact perception, explanation capability, and detection accuracy.

### Hierarchical Artifact Taxonomy

The dataset defines a comprehensive taxonomy to categorize AI generation errors, dividing them into **Low-level Forgery** (e.g., texture/color anomalies) and **Violation of Laws** (e.g., physical inconsistencies).

<p align="center">
  <img src="https://github.com/JoeLeelyf/Skyra/raw/main/static/images/taxonomy.png" alt="Taxonomy of Artifacts" width="60%">
</p>
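
In code, the top level of this taxonomy can be pictured as a simple category-to-subtypes mapping. The sketch below is a minimal Python illustration listing only the example subtypes named above; the complete set of subcategories is given in the figure and the paper, not here.

```python
# Minimal sketch of the two-level artifact taxonomy. Only the example
# subtypes mentioned in the text are listed; see the figure/paper for
# the complete hierarchy.
ARTIFACT_TAXONOMY = {
    "Low-level Forgery": ["texture anomaly", "color anomaly"],
    "Violation of Laws": ["physical inconsistency"],
}
```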

## Dataset: ViF-CoT-4K

**ViF-CoT-4K** is constructed to address the lack of detailed artifact annotations in existing datasets.

-   **Scale**: ~4,000 videos, including high-quality samples from **Sora-2, Wan2.1, Kling**, and more.
-   **Annotation**: Fine-grained labels including artifact type, textual explanation, timestamps, and bounding boxes (an illustrative record is sketched after the figure below).
-   **Real-Fake Pairs**: Generated videos are semantically aligned with real counterparts to prevent shortcut learning.
<p align="center">
  <img src="https://github.com/JoeLeelyf/Skyra/raw/main/static/images/statistics.png" alt="Dataset Statistics" width="90%">
</p>
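
To make the annotation format concrete, here is a hypothetical single-record sketch in Python. Every field name and convention shown (`artifact_type`, `timestamps`, normalized `bboxes`, the path layout) is an illustrative assumption, not the released schema; consult the dataset files for the actual format.

```python
# Hypothetical annotation record. Field names, the bbox convention, and
# the path layout are assumptions for illustration only.
example_annotation = {
    "video": "Fake/Wan2.1/example-0.mp4",  # assumed path layout
    "label": "fake",                       # real vs. AI-generated
    "artifacts": [
        {
            "artifact_type": "texture anomaly",  # a leaf of the taxonomy
            "explanation": "The brick texture flickers between frames.",
            "timestamps": [1.2, 2.8],            # assumed start/end in seconds
            "bboxes": [[0.10, 0.25, 0.40, 0.60]],  # assumed normalized x1, y1, x2, y2
        }
    ],
}
```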

## Usage

### Requirements
-   **SFT Stage**: follow [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for environment setup.
-   **RL Stage**: follow [verl](https://github.com/volcengine/verl) for environment setup.
-   **Inference**: follow [Qwen-2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) for quick start and [vLLM](https://github.com/vllm-project/vllm) for deployment.

### Data Preparation
-   Training data: Download and prepare the **ViF-CoT-4K** dataset from [here](https://huggingface.co/datasets/JoeLeelyf/ViF-CoT-4K).

-   Evaluation data: Download evaluation datasets (e.g., **ViF-Bench**) from [here](https://huggingface.co/datasets/JoeLeelyf/ViF-Bench), then update the paths in `test_index.json` to point to your local directory.
The `test_index.json` file should follow this format:
```json
{
    "Real": [
        "path_to_parsed_frames_dir/Real/gdymHI9S6gM-0",
        ...
    ],
    "LTX-Video-13B-T": [
        "path_to_parsed_frames_dir/Fake/LTX-Video-13B-T/gdymHI9S6gM-0",
        ...
    ],
    ...
}
```
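
For convenience, the sketch below shows one way to fetch both datasets with `huggingface_hub` and walk the frame directories listed in `test_index.json`. The frame-file extension and directory layout are assumptions based on the index format above.

```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Download the training and evaluation datasets from the Hub.
train_dir = snapshot_download(repo_id="JoeLeelyf/ViF-CoT-4K", repo_type="dataset")
eval_dir = snapshot_download(repo_id="JoeLeelyf/ViF-Bench", repo_type="dataset")

# Walk the parsed-frame directories listed in the index.
with open("test_index.json") as f:
    index = json.load(f)

for source, frame_dirs in index.items():  # e.g. "Real", "LTX-Video-13B-T"
    for frame_dir in frame_dirs:
        frames = sorted(Path(frame_dir).glob("*.jpg"))  # assumed frame extension
        print(source, frame_dir, len(frames))
```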

### Supervised Fine-Tuning (SFT)
We use LLaMA-Factory for SFT. You can start training after setting up the dataset config following the instructions in the LLaMA-Factory repository.

```bash
cd train/LLaMA-Factory
bash train.sh
```

### Reinforcement Learning (RL)
We use verl for RL training with GRPO; our adapted reward design is provided in `train/verl/verl/utils/reward_score/ladm.py`.
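
As a rough illustration of what a rule-based GRPO reward of this kind often looks like, here is a minimal sketch combining a format check with detection correctness. The tag convention, the weights, and the `compute_score` signature are all assumptions, not the released implementation in `ladm.py`.

```python
import re

def compute_score(solution_str: str, ground_truth: str) -> float:
    """Hypothetical GRPO reward: format bonus plus detection correctness.

    The <think>/<answer> tag convention, the weights, and this signature
    are illustrative assumptions, not the released reward in ladm.py.
    """
    score = 0.0
    # Format reward: the rollout should contain reasoning and a final answer.
    match = re.search(r"<answer>(.*?)</answer>", solution_str, re.DOTALL)
    if "<think>" in solution_str and match:
        score += 0.1
    # Accuracy reward: the predicted real/fake verdict must match the label.
    if match and match.group(1).strip().lower() == ground_truth.strip().lower():
        score += 0.9
    return score
```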

### Evaluation

Evaluation scripts are provided in the `eval/` directory. You can run them as follows:

-   Inference: Run inference to get model predictions and explanations, and save the results to a JSON file.
```bash
cd eval
bash scripts/Skyra/inference.sh
# or
python inference.py \
    --index_json /path_to/test_index.json \
    --model_path /path_to/Skyra-SFT \
    --model_name Skyra-SFT \
    --save_dir results/Skyra
```

-   Evaluation: Evaluate the model predictions against ground truth and compute metrics.
```bash
cd eval
bash scripts/Skyra/eval.sh
# or
python eval.py \
    --json_file_path results/Skyra/Skyra-SFT_predictions.json
```
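
If you want to post-process the predictions yourself, a minimal accuracy computation might look like the following. The prediction-file structure assumed here (a list of records with `label` and `pred` fields) is a guess for illustration; `eval.py` defines the actual format and the full metric suite.

```python
import json

# Assumed structure: a list of {"label": "real" | "fake", "pred": "real" | "fake"}.
with open("results/Skyra/Skyra-SFT_predictions.json") as f:
    records = json.load(f)

correct = sum(r["pred"] == r["label"] for r in records)
print(f"binary detection accuracy: {correct / len(records):.4f}")
```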

## License

The **ViF-CoT-4K** dataset and **Skyra** model weights are released under the **CC BY 4.0** license. Users must also adhere to the license terms of the source datasets (Kinetics-400, Panda-70M, HD-VILA-100M).

## Citation

If you find Skyra or ViF-CoT-4K useful, please cite our paper:

```bibtex
@misc{li2025skyraaigeneratedvideodetection,
      title={Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning}, 
      author={Yifei Li and Wenzhao Zheng and Yanran Zhang and Runze Sun and Yu Zheng and Lei Chen and Jie Zhou and Jiwen Lu},
      year={2025},
      eprint={2512.15693},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.15693}, 
}
```