Add task category, license, and links to paper and code #1
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,56 +1,57 @@
- 
- 

- ```
- @article{Ciobanu2025xmad,
-   title="{XMAD-Bench: Cross-Domain Multilingual Audio Deepfake Benchmark}",
-   author={Ciobanu, Ioan-Paul and Hiji, Andrei-Iulian and Ristea, Nicolae-Catalin and Irofti, Paul and Rusu, Cristian and Ionescu, Radu Tudor},
-   journal={arXiv preprint arXiv:2506.00462},
-   year={2025}
- }
- ```

- 
- 
- alleviate this issue, with many recent studies reporting accuracy rates close to $99\%$. However, these methods are typically tested in an in-domain setup, where the deepfake samples from the training and test sets
- are produced by the same generative models. To this end, we introduce XMAD-Bench, a large-scale cross-domain multilingual audio deepfake benchmark comprising 668.8 hours of real and deepfake speech. In our novel dataset,
- the speakers, the generative methods, and the real audio sources are distinct across training and test splits. This leads to a challenging cross-domain evaluation setup, where audio deepfake detectors can be tested in the wild.
- Our in-domain and cross-domain experiments indicate a clear disparity between the in-domain performance of deepfake detectors, which is usually as high as $100\%$, and the cross-domain performance of the same models, which is sometimes
- similar to random chance. Our benchmark highlights the need for the development of robust audio deepfake detectors, which maintain their generalization capacity across different languages, speakers, generative methods, and data sources.
- 
- Split statistics on our data set:




- ##
- Our data is available at: https://drive.google.com/drive/folders/1PjboiIGjNWU6UeuIHrZu3ofF70o0A5-X?usp=drive_link
- 
- Modify the detection/config.json with the desired locations. Then run:
```bash
python detection/main.py
```

```bash
python demo_script.py \
--sentence "Generarea unui exemplu de test a reusit." \
@@ -58,159 +59,44 @@ python demo_script.py \
--output synthesized_sample.wav
```

- ## Romanian datasets generation
- 
- #### VITS + FreeVC
- 
- ```python
- # in vits_freevc.py you need to modify the model to:
- vc_freevc = TTS("voice_conversion_models/multilingual/vctk/freevc24")
- model_name = "tts_models/ro/cv/vits"
- ```
- 
- #### VITS + KNN-VC
- 
- ```python
- # in vits_knnvc.py you need to modify the model to:
- knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
- model_name = "tts_models/ro/cv/vits"
- ```
- 
- #### VITS + OpenVoice
- 
- ```python
- # in vits_openvoice.py you need to modify the model to:
- vc_openvoice = TTS("voice_conversion_models/multilingual/multi-dataset/openvoice_v2")
- model_name = "tts_models/ro/cv/vits"
- ```
- 
- ## Arabic datasets generation
- 
- #### fairseq + FreeVC
- 
- ```python
- # in fairseq_freevc.py you need to modify the model to:
- vc_freevc = TTS("voice_conversion_models/multilingual/vctk/freevc24")
- model_name = "tts_models/ara/fairseq/vits"
- ```
- 
- #### fairseq + KNN-VC
- 
- ```python
- # in fairseq_knnvc.py you need to modify the model to:
- knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
- model_name = "tts_models/ara/fairseq/vits"
- ```
- 
- #### XTTSv2
- 
- ```python
- # in xttsv2.py you need to modify the model to:
- language = "ar"
- ```
- 
- ## Russian datasets generation
- 
- #### VITS + KNN-VC
- 
- ```python
- # in vits_knnvc.py you need to modify the model to:
- knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
- model_name = "tts_models/rus/fairseq/vits"
- ```
- 
- #### VITS + OpenVoice
- 
- ```python
- # in vits_openvoice.py you need to modify the model to:
- vc_openvoice = TTS("voice_conversion_models/multilingual/multi-dataset/openvoice_v2")
- model_name = "tts_models/rus/fairseq/vits"
- ```
- 
- #### XTTSv2
- 
- ```python
- # in xttsv2.py you need to modify the model to:
- language = "ru"
- ```
- 
- ## English datasets generation
- 
- #### VITS + KNN-VC
- 
- ```python
- # in vits_knnvc.py you need to modify the model to:
- knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
- model_name = "tts_models/en/ljspeech/vits"
- ```
- 
- #### VITS + OpenVoice
- 
- ```python
- # in vits_openvoice.py you need to modify the model to:
- vc_openvoice = TTS("voice_conversion_models/multilingual/multi-dataset/openvoice_v2")
- model_name = "tts_models/eng/fairseq/vits"
- ```
- 
- #### XTTSv2
- 
- ```python
- # in xttsv2.py you need to modify the model to:
- language = "en"
- ```
- 
- ## German datasets generation
- 
- #### VITS + KNN-VC
- 
- ```python
- # in vits_knnvc.py you need to modify the model to:
- knn_vc = torch.hub.load('voice_conversion_models/multilingual/multi-dataset/knnvc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
- model_name = "tts_models/de/css10/vits-neon"
- ```
- 
- #### XTTSv2
- 
- ```python
- # in xttsv2.py you need to modify the model to:
- language = "de"
- ```
- 
- ## Spanish datasets generation
- 
- #### VITS + OpenVoice
- 
- ```python
- # in vits_openvoice.py you need to modify the model to:
- vc_openvoice = TTS("voice_conversion_models/multilingual/multi-dataset/openvoice_v2")
- model_name = "tts_models/spa/fairseq/vits"
- ```
- 
- #### XTTSv2
- 
- ```python
- # in xttsv2.py you need to modify the model to:
- language = "es"
- ```
- 
- ## Mandarin datasets generation
- 
- #### Tacotron + KNN-VC
- 
- ```python
- # in vits_knnvc.py you need to modify the model to:
- knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
- model_name = "tts_models/zh-CN/baker/tacotron2-DDC-GST"
- ```
- 
- #### Bark + FreeVC
- 
- ```python
- # in vits_freevc.py you need to modify the model to:
- vc_freevc = TTS("voice_conversion_models/multilingual/vctk/freevc24")
- model_name = "tts_models/multilingual/multi-dataset/bark"
- ```
+ ---
+ license: cc-by-nc-sa-4.0
+ task_categories:
+ - audio-classification
+ language:
+ - ro
+ - ar
+ - ru
+ - en
+ - de
+ - es
+ - zh
+ tags:
+ - audio-deepfake
+ - deepfake-detection
+ - cross-domain
+ ---

+ # XMAD-Bench: Cross-Domain Multilingual Audio Deepfake Benchmark

+ [**Paper**](https://huggingface.co/papers/2506.00462) | [**GitHub**](https://github.com/ristea/xmad-bench)

+ ### by Ioan-Paul Ciobanu, Andrei-Iulian Hiji, Nicolae-Catalin Ristea, Paul Irofti, Cristian Rusu, Radu Tudor Ionescu

+ -----------------------------------------

+ ## Description
+ Recent advances in audio generation have led to an increasing number of deepfakes, making the general public more vulnerable to financial scams, identity theft, and misinformation. Audio deepfake detectors promise to alleviate this issue, with many recent studies reporting accuracy rates close to 99%. However, these methods are typically tested in an in-domain setup, where the deepfake samples from the training and test sets are produced by the same generative models.

+ To this end, we introduce **XMAD-Bench**, a large-scale cross-domain multilingual audio deepfake benchmark comprising 668.8 hours of real and deepfake speech. In our novel dataset, the speakers, the generative methods, and the real audio sources are distinct across training and test splits. This leads to a challenging cross-domain evaluation setup, where audio deepfake detectors can be tested "in the wild".

+ Our benchmark highlights the need for the development of robust audio deepfake detectors, which maintain their generalization capacity across different languages, speakers, generative methods, and data sources.

+ ### Dataset Statistics
+ Split statistics on our dataset:



+ ### Benchmarking Results
+ Results obtained with various state-of-the-art methods on our dataset:


+ ## Download Data
+ The raw data is available via Google Drive: [Download Link](https://drive.google.com/drive/folders/1PjboiIGjNWU6UeuIHrZu3ofF70o0A5-X?usp=drive_link)
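
For scripted downloads, one option (an assumption, not the authors' tooling; any Google Drive client works) is the third-party `gdown` package, which can fetch a whole Drive folder:

```python
# Assumption: using the third-party gdown package to mirror the Drive folder
# locally; the folder ID is taken from the link above.
import gdown

gdown.download_folder(
    url="https://drive.google.com/drive/folders/1PjboiIGjNWU6UeuIHrZu3ofF70o0A5-X",
    output="xmad-bench-data",  # local target directory
    quiet=False,
)
```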

+ ## Usage

+ ### Detection Framework
+ Modify the `detection/config.json` with the desired locations. Then run:
```bash
python detection/main.py
```
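
The exact schema of `detection/config.json` is defined in the repository; the sketch below is only illustrative, and the key names (`train_data_path`, `test_data_path`, `output_dir`) are assumptions rather than the actual fields:

```python
# Hypothetical sketch: the key names are illustrative assumptions, not the
# real schema of detection/config.json (check the repository for actual keys).
import json

config = {
    "train_data_path": "/data/xmad-bench/train",  # downloaded training split
    "test_data_path": "/data/xmad-bench/test",    # downloaded test split
    "output_dir": "runs/baseline",                # where results are written
}

with open("detection/config.json", "w") as f:
    json.dump(config, f, indent=2)
```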

+ ### Demo Generation Script
```bash
python demo_script.py \
--sentence "Generarea unui exemplu de test a reusit." \
--output synthesized_sample.wav
```

+ ## Dataset Generation Examples
+ The benchmark utilizes several TTS and Voice Conversion models across different languages. Below are configuration examples found in the repository:

+ **Romanian (VITS + FreeVC)**
```python
+ # in vits_freevc.py
+ vc_freevc = TTS("voice_conversion_models/multilingual/vctk/freevc24")
+ model_name = "tts_models/ro/cv/vits"
```
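
For context, a minimal end-to-end sketch of such a TTS + voice conversion pipeline with the Coqui `TTS` Python API; the text and the reference-speaker clip are illustrative assumptions, and the repository's `vits_freevc.py` wires this up in its own way:

```python
# Minimal sketch: synthesize Romanian speech with VITS, then convert the voice
# with FreeVC toward a reference speaker. File paths are illustrative.
from TTS.api import TTS

tts = TTS("tts_models/ro/cv/vits")
tts.tts_to_file(
    text="Generarea unui exemplu de test a reusit.",  # Romanian for "Generating a test example succeeded."
    file_path="tts_output.wav",
)

vc_freevc = TTS("voice_conversion_models/multilingual/vctk/freevc24")
vc_freevc.voice_conversion_to_file(
    source_wav="tts_output.wav",
    target_wav="reference_speaker.wav",  # assumed target-speaker clip
    file_path="deepfake_sample.wav",
)
```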

+ **English (XTTSv2)**
```python
+ # in xttsv2.py
+ language = "en"
```
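
As a rough sketch, an XTTSv2 call through the same Coqui API looks like the following; the prompt text and speaker reference are assumptions, so consult the repository's `xttsv2.py` for the actual script:

```python
# Minimal sketch: XTTSv2 clones the voice in speaker_wav for the given language.
from TTS.api import TTS

xtts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
xtts.tts_to_file(
    text="Generating a test example succeeded.",
    speaker_wav="reference_speaker.wav",  # assumed reference clip
    language="en",
    file_path="xtts_sample.wav",
)
```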

+ **Mandarin (Tacotron + KNNVC)**
```python
+ # in vits_knnvc.py
+ knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
+ model_name = "tts_models/zh-CN/baker/tacotron2-DDC-GST"
```
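
The `knn_vc` object above follows the `bshall/knn-vc` torch.hub interface; a minimal conversion sketch under that API (the wav paths are illustrative assumptions) would be:

```python
# Minimal sketch of the bshall/knn-vc API: convert a synthesized utterance
# toward the voice of a few target-speaker recordings (knn-vc runs at 16 kHz).
import torch
import torchaudio

knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc',
                        prematched=True, trust_repo=True, pretrained=True)

query_seq = knn_vc.get_features("tts_output.wav")                  # source utterance
matching_set = knn_vc.get_matching_set(["ref1.wav", "ref2.wav"])   # target-speaker clips
out_wav = knn_vc.match(query_seq, matching_set, topk=4)            # converted waveform, shape (T,)
torchaudio.save("converted.wav", out_wav[None], 16000)
```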

+ Refer to the [GitHub repository](https://github.com/ristea/xmad-bench) for specific generation scripts for Arabic, Russian, German, and Spanish.

+ ## Reference

+ If you use this dataset or code in your research, please cite the corresponding paper:

+ ```bibtex
+ @article{Ciobanu2025xmad,
+   title="{XMAD-Bench: Cross-Domain Multilingual Audio Deepfake Benchmark}",
+   author={Ciobanu, Ioan-Paul and Hiji, Andrei-Iulian and Ristea, Nicolae-Catalin and Irofti, Paul and Rusu, Cristian and Ionescu, Radu Tudor},
+   journal={arXiv preprint arXiv:2506.00462},
+   year={2025}
+ }
```

+ ## License

+ The source code and models are released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International ([CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)) license.