---
pretty_name: QuadVoxBench
language:
- en
- zh
license: mit
tags:
- audio
- deepfake-detection
- anti-spoofing
size_categories:
- 100K<n<1M
task_categories:
- audio-classification
- voice-activity-detection
---

# QuadVoxBench: A Large-Scale Fine-Grained Benchmark for Robust Audio Deepfake Detection

## Dataset Description

QuadVoxBench is a large-scale (392+ hours) audio benchmark dataset designed for the robust evaluation of deepfake detection systems. It is structured around four key aspects of audio variation: **Speech Style**, **Emotional Prosody**, **Acoustic Environment**, and **Manipulation Type**. The dataset features a diverse collection of real and synthetically generated audio in English and Chinese, created using a comprehensive toolkit of modern text-to-speech (TTS) and voice conversion (VC) models.

For more details, please refer to our [main repository](https://github.com/wtalioy/QuadVox).

## Dataset Structure

The dataset is organized into 11 subsets, each corresponding to a specific domain or speech characteristic. Each subset contains real and fake audio samples, along with metadata.

```
├── Audiobook/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── Emotional/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── Interview/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── Movie/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── News/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── NoisySpeech/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── PartialFake/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta.json
├── PhoneCall/
│   ├── en/
│   │   ├── audio/
│   │   │   ├── real/
│   │   │   └── fake/
│   │   └── meta_test.json
│   └── zh-cn/
│       ├── audio/
│       │   ├── real/
│       │   └── fake/
│       └── meta_test.json
├── Podcast/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── PublicFigure/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
└── PublicSpeech/
    ├── audio/
    │   ├── real/
    │   └── fake/
    └── meta_test.json
```
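
Below is a minimal sketch of how one might enumerate a subset on disk, deriving labels from the `real/` and `fake/` folders shown above. The helper name `load_subset` and the example root path are placeholders for illustration; the metadata schema itself is not documented here, and the language-split `PhoneCall` subset (`en/`, `zh-cn/`) needs one extra directory level.

```python
from pathlib import Path
import json

def load_subset(root: str, subset: str = "Audiobook"):
    """Collect (path, label) pairs for one subset, labelling each clip by the
    real/ or fake/ folder it is stored in. The metadata JSON is returned
    as-is for inspection, since its schema is not specified here."""
    subset_dir = Path(root) / subset

    samples = []
    for label in ("real", "fake"):
        for clip in sorted((subset_dir / "audio" / label).glob("*")):
            if clip.is_file():
                samples.append({"path": str(clip), "label": label})

    # PartialFake ships meta.json instead of meta_test.json, so try both names.
    meta = None
    for name in ("meta_test.json", "meta.json"):
        meta_path = subset_dir / name
        if meta_path.exists():
            with open(meta_path, encoding="utf-8") as f:
                meta = json.load(f)
            break

    return samples, meta

# Example (hypothetical local path):
# samples, meta = load_subset("/data/QuadVoxBench", "Emotional")
```

Note that `PhoneCall` is split by language, so for that subset the same logic would be applied to `PhoneCall/en` and `PhoneCall/zh-cn` separately.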

## Citation

If you use QuadVoxBench in your research, please cite:

```
@inproceedings{quadvox2026,
  title={QuadVox: A Large-Scale Fine-Grained Benchmark with Relative Audio Proximity Test for Robust Audio Deepfake Detection},
  author={Ruiming Wang and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026}
}
```