---
task_categories:
- video-text-to-text
license: cc-by-4.0
language:
- en
tags:
- video-detection
- ai-generated-content
- explainable-ai
- multimodal
---

# ViF-CoT-4K Dataset

This repository hosts the **ViF-CoT-4K** dataset, a key component of the [Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning](https://huggingface.co/papers/2512.15693) paper. Skyra is a specialized multimodal large language model (MLLM) designed to identify human-perceivable visual artifacts in AI-generated videos, leveraging them as grounded evidence for both detection and explanation.

- **Paper**: [Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning](https://huggingface.co/papers/2512.15693)
- **Project Page**: https://joeleelyf.github.io/Skyra/
- **Code**: https://github.com/JoeLeelyf/Skyra

## Introduction

The misuse of AI-driven video generation technologies has raised serious social concerns, highlighting the urgent need for reliable AI-generated video detectors. Most existing methods are limited to binary classification and lack the explanations needed for human interpretation. **ViF-CoT-4K** addresses this gap by providing a specialized dataset for training multimodal large language models (MLLMs) to identify human-perceivable visual artifacts in AI-generated videos and leverage them as grounded evidence for both detection and explanation.

ViF-CoT-4K is the first large-scale AI-generated video artifact dataset with fine-grained human annotations, supporting the development of models that combine spatio-temporal artifact perception, explanation capability, and detection accuracy.

### Hierarchical Artifact Taxonomy

The dataset defines a comprehensive taxonomy to categorize AI generation errors, dividing them into **Low-level Forgery** (e.g., texture/color anomalies) and **Violation of Laws** (e.g., physical inconsistencies).

<p align="center">
  <img src="https://github.com/JoeLeelyf/Skyra/raw/main/static/images/taxonomy.png" alt="Taxonomy of Artifacts" width="60%">
</p>

## Dataset: ViF-CoT-4K

**ViF-CoT-4K** is constructed to address the lack of detailed artifact annotations in existing datasets.

- **Scale**: ~4,000 videos, including high-quality samples from **Sora-2, Wan2.1, Kling**, and more.
- **Annotation**: Fine-grained labels including artifact type, textual explanation, timestamps, and bounding boxes.
- **Real-Fake Pairs**: Generated videos are semantically aligned with real counterparts to prevent shortcut learning.

<p align="center">
  <img src="https://github.com/JoeLeelyf/Skyra/raw/main/static/images/statistics.png" alt="Dataset Statistics" width="90%">
</p>

## Usage

### Requirements

- **SFT Stage**: follow [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for environment setup.
- **RL Stage**: follow [verl](https://github.com/volcengine/verl) for environment setup.
- **Inference**: follow [Qwen-2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) for a quick start and [vLLM](https://github.com/vllm-project/vllm) for deployment; a minimal quick-start sketch follows this list.
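
As a rough illustration of the quick-start path, the sketch below runs video inference with a fine-tuned checkpoint through the standard Qwen2.5-VL `transformers` interface. The checkpoint path, video path, and prompt are placeholders, not values shipped with this repository.

```python
# Minimal video-inference sketch following the Qwen2.5-VL quick start.
# "/path_to/Skyra-SFT" and "/path_to/video.mp4" are placeholders.
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "/path_to/Skyra-SFT", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("/path_to/Skyra-SFT")

messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "file:///path_to/video.mp4"},
        {"type": "text", "text": "Is this video real or AI-generated? Point out any visual artifacts."},
    ],
}]

# Build the chat prompt and extract the video frames expected by the processor.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens from the output before decoding.
output_ids = model.generate(**inputs, max_new_tokens=512)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```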

### Data Preparation

- Training data: Download and prepare the **ViF-CoT-4K** dataset from [here](https://huggingface.co/datasets/JoeLeelyf/ViF-CoT-4K); one way to fetch it is sketched below.
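
If you prefer to download the dataset programmatically, `huggingface_hub` works as shown below; the `local_dir` value is a placeholder.

```python
# Download the ViF-CoT-4K dataset repository to a local directory.
# "data/ViF-CoT-4K" is a placeholder; point it wherever you keep training data.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="JoeLeelyf/ViF-CoT-4K",
    repo_type="dataset",
    local_dir="data/ViF-CoT-4K",
)
```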
- Evaluation data: Download evaluation datasets (e.g., **ViF-Bench**) from [here](https://huggingface.co/datasets/JoeLeelyf/ViF-Bench), then modify the paths in `test_index.json` to point to your local directory.

The `test_index.json` file should follow this format:

```json
{
  "Real": [
    "path_to_parsed_frames_dir/Real/gdymHI9S6gM-0",
    ...
  ],
  "LTX-Video-13B-T": [
    "path_to_parsed_frames_dir/Fake/LTX-Video-13B-T/gdymHI9S6gM-0",
    ...
  ],
  ...
}
```
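
Since the index stores frame-directory paths, one convenient way to adapt it is to rewrite the path prefix programmatically. A minimal sketch, assuming the placeholder prefix `path_to_parsed_frames_dir` shown above and an illustrative local root:

```python
# Rewrite the placeholder prefix in test_index.json to your local frames directory.
# "path_to_parsed_frames_dir" and "/data/ViF-Bench/frames" are illustrative values.
import json

with open("test_index.json") as f:
    index = json.load(f)

local_root = "/data/ViF-Bench/frames"
index = {
    source: [p.replace("path_to_parsed_frames_dir", local_root, 1) for p in paths]
    for source, paths in index.items()
}

with open("test_index.json", "w") as f:
    json.dump(index, f, indent=2)
```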

### Supervised Fine-Tuning (SFT)

We use LLaMA-Factory for SFT. You can start training after setting up the dataset config according to the instructions in the LLaMA-Factory repository; a sample registration is sketched after the command below.

```bash
cd train/LLaMA-Factory
bash train.sh
```
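
For reference, LLaMA-Factory registers datasets in `data/dataset_info.json`. A hypothetical entry for a ShareGPT-style video dataset might look like the following; the entry name, file name, and column names are assumptions, not values shipped with this repository.

```json
{
  "vif_cot_4k": {
    "file_name": "vif_cot_4k.json",
    "formatting": "sharegpt",
    "columns": {
      "messages": "messages",
      "videos": "videos"
    }
  }
}
```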

### Reinforcement Learning (RL)

We use verl for RL training with GRPO; our adapted reward design is provided in `train/verl/verl/utils/reward_score/ladm.py`. A sketch of the reward interface is given below.
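
For orientation only, verl custom reward modules expose a `compute_score` function with the signature shown below. The toy reward here just checks a predicted real/fake verdict against the ground truth; it illustrates the interface, and the actual reward in `ladm.py` may differ substantially.

```python
# Illustrative verl-style reward: NOT the actual reward in ladm.py.
# verl custom rewards expose compute_score(data_source, solution_str,
# ground_truth, extra_info=None) and return a float score.
import re


def compute_score(data_source, solution_str, ground_truth, extra_info=None):
    """Toy reward: 1.0 if the predicted real/fake verdict matches, else 0.0."""
    match = re.search(r"\b(real|fake)\b", solution_str.lower())
    if match is None:
        return 0.0  # no parseable verdict in the model output
    return 1.0 if match.group(1) == str(ground_truth).lower() else 0.0
```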

### Evaluation

Evaluation scripts are provided in the `eval/` directory. Run them as follows:

- Inference: run inference to get model predictions and explanations, and save the results to a JSON file.

```bash
cd eval
bash scripts/Skyra/inference.sh
# or
python inference.py \
    --index_json /path_to/test_index.json \
    --model_path /path_to/Skyra-SFT \
    --model_name Skyra-SFT \
    --save_dir results/Skyra
```

- Evaluation: evaluate the model predictions against the ground truth and compute metrics.

```bash
cd eval
bash scripts/Skyra/eval.sh
# or
python eval.py \
    --json_file_path results/Skyra/Skyra-SFT_predictions.json
```
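
If you want to inspect the predictions file directly, a minimal sketch is below; it assumes each record carries `pred` and `label` fields, which is an assumption about the output schema rather than a documented format.

```python
# Quick accuracy check over a predictions JSON file.
# The "pred"/"label" field names are assumed, not a documented schema.
import json

with open("results/Skyra/Skyra-SFT_predictions.json") as f:
    records = json.load(f)

correct = sum(1 for r in records if r["pred"] == r["label"])
print(f"accuracy: {correct / len(records):.4f} over {len(records)} samples")
```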

## License

The **ViF-CoT-4K** dataset and **Skyra** model weights are released under the **CC BY 4.0** license. Users must also adhere to the terms of the source datasets (Kinetics-400, Panda-70M, HD-VILA-100M).

## Citation

If you find Skyra or ViF-CoT-4K useful, please cite our paper:

```bibtex
@misc{li2025skyraaigeneratedvideodetection,
      title={Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning},
      author={Yifei Li and Wenzhao Zheng and Yanran Zhang and Runze Sun and Yu Zheng and Lei Chen and Jie Zhou and Jiwen Lu},
      year={2025},
      eprint={2512.15693},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.15693},
}
```