# XMAD-Bench: Cross-Domain Multilingual Audio Deepfake Benchmark
### by Ioan-Paul Ciobanu, Andrei-Iulian Hiji, Nicolae-Catalin Ristea, Paul Irofti, Cristian Rusu, Radu Tudor Ionescu
-----------------------------------------
## License
The source code and models are released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International ([CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)) license.
## Reference
If you use this dataset or code in your research, please cite the corresponding paper:
1. Ioan-Paul Ciobanu, Andrei-Iulian Hiji, Nicolae-Catalin Ristea, Paul Irofti, Cristian Rusu, Radu Tudor Ionescu. XMAD-Bench: Cross-Domain Multilingual Audio Deepfake Benchmark. arXiv preprint arXiv:2506.00462 (2025).
Bibtex:
```
@article{Ciobanu2025xmad,
  title={{XMAD-Bench: Cross-Domain Multilingual Audio Deepfake Benchmark}},
  author={Ciobanu, Ioan-Paul and Hiji, Andrei-Iulian and Ristea, Nicolae-Catalin and Irofti, Paul and Rusu, Cristian and Ionescu, Radu Tudor},
  journal={arXiv preprint arXiv:2506.00462},
  year={2025}
}
```
## Description
Recent advances in audio generation have led to an increasing number of deepfakes, making the general public more vulnerable to financial scams, identity theft, and misinformation. Audio deepfake detectors promise to alleviate this issue, with many recent studies reporting accuracy rates close to $99\%$. However, these methods are typically tested in an in-domain setup, where the deepfake samples in the training and test sets are produced by the same generative models. To address this limitation, we introduce XMAD-Bench, a large-scale cross-domain multilingual audio deepfake benchmark comprising 668.8 hours of real and deepfake speech. In our novel dataset, the speakers, the generative methods, and the real audio sources are distinct across the training and test splits. This leads to a challenging cross-domain evaluation setup, in which audio deepfake detectors can be tested in the wild. Our in-domain and cross-domain experiments indicate a clear disparity between the in-domain performance of deepfake detectors, which is usually as high as $100\%$, and the cross-domain performance of the same models, which is sometimes similar to random chance. Our benchmark highlights the need to develop robust audio deepfake detectors that maintain their generalization capacity across different languages, speakers, generative methods, and data sources.
Split statistics for our dataset:


Results obtained with various state-of-the-art methods on our dataset:

## Download data
Our data is available at: https://drive.google.com/drive/folders/1PjboiIGjNWU6UeuIHrZu3ofF70o0A5-X?usp=drive_link
## Detection framework
Modify `detection/config.json` to point to the desired locations, then run:
```bash
python detection/main.py
```
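As a purely hypothetical illustration of such a config (the key names below are assumptions on our part; the actual schema is defined by `detection/config.json` in this repository):
```python
import json

# Hypothetical config keys, for illustration only; consult the actual
# detection/config.json in this repository for the real schema.
config = {
    "train_data": "/path/to/train_split",
    "test_data": "/path/to/test_split",
    "checkpoint_dir": "/path/to/checkpoints",
}

with open("detection/config.json", "w") as f:
    json.dump(config, f, indent=2)
```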
## Demo generation script
```bash
python demo_script.py \
    --sentence "Generarea unui exemplu de test a reusit." \
    --refs ref1.wav ref2.wav ref3.wav ... \
    --output synthesized_sample.wav
```
The example sentence is Romanian for "The generation of a test example succeeded."
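For intuition, below is a minimal sketch of what such a script might do internally, assuming it wraps Coqui TTS's XTTS-v2 voice cloning; this is an assumption on our part, and the actual `demo_script.py` may be implemented differently.
```python
# Minimal sketch, assuming Coqui TTS's XTTS-v2 model; the actual
# demo_script.py in this repository may differ.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Generarea unui exemplu de test a reusit.",
    speaker_wav=["ref1.wav", "ref2.wav", "ref3.wav"],  # reference recordings of the target voice
    language="ro",
    file_path="synthesized_sample.wav",
)
```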
## Romanian dataset generation
#### VITS + FreeVC
```python
# In vits_freevc.py, set the models as follows:
vc_freevc = TTS("voice_conversion_models/multilingual/vctk/freevc24")
model_name = "tts_models/ro/cv/vits"
```
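For context, the snippet below is a minimal end-to-end sketch of this TTS + voice-conversion recipe using the Coqui TTS Python API; the file names are illustrative and not taken from this repository. The same `voice_conversion_to_file` call also covers the OpenVoice recipes further down.
```python
# Minimal sketch of the VITS + FreeVC pipeline via the Coqui TTS API;
# file names are illustrative.
from TTS.api import TTS

# 1) Synthesize Romanian speech with the VITS model.
tts = TTS("tts_models/ro/cv/vits")
tts.tts_to_file(text="Generarea unui exemplu de test a reusit.", file_path="tts_out.wav")

# 2) Convert the synthetic voice toward a target speaker with FreeVC.
vc_freevc = TTS("voice_conversion_models/multilingual/vctk/freevc24")
vc_freevc.voice_conversion_to_file(
    source_wav="tts_out.wav",
    target_wav="target_speaker.wav",
    file_path="deepfake_sample.wav",
)
```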
#### VITS + KNN-VC
```python
# In vits_knnvc.py, set the models as follows:
knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
model_name = "tts_models/ro/cv/vits"
```
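The loaded `knn_vc` object follows the usage documented in the bshall/knn-vc repository; below is a minimal sketch of the conversion step, with illustrative paths.
```python
# Minimal sketch of the KNN-VC conversion step, following the usage
# documented in the bshall/knn-vc repository; paths are illustrative.
import torch
import torchaudio

knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)

# WavLM features of the synthetic source utterance.
query_seq = knn_vc.get_features("tts_out.wav")
# Matching set built from target-speaker reference recordings.
matching_set = knn_vc.get_matching_set(["ref1.wav", "ref2.wav"])
# Replace each source frame with its nearest target frames, then vocode.
out_wav = knn_vc.match(query_seq, matching_set, topk=4)  # 1-D tensor at 16 kHz
torchaudio.save("deepfake_sample.wav", out_wav[None], 16000)
```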
#### VITS + OpenVoice
```python
# In vits_openvoice.py, set the models as follows:
vc_openvoice = TTS("voice_conversion_models/multilingual/multi-dataset/openvoice_v2")
model_name = "tts_models/ro/cv/vits"
```
## Arabic dataset generation
#### fairseq + FreeVC
```python
# In fairseq_freevc.py, set the models as follows:
vc_freevc = TTS("voice_conversion_models/multilingual/vctk/freevc24")
model_name = "tts_models/ara/fairseq/vits"
```
#### fairseq + KNN-VC
```python
# In fairseq_knnvc.py, set the models as follows:
knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
model_name = "tts_models/ara/fairseq/vits"
```
#### XTTSv2
```python
# In xttsv2.py, set the language as follows:
language = "ar"
```
## Russian dataset generation
#### VITS + KNN-VC
```python
# In vits_knnvc.py, set the models as follows:
knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
model_name = "tts_models/rus/fairseq/vits"
```
#### VITS + OpenVoice
```python
# In vits_openvoice.py, set the models as follows:
vc_openvoice = TTS("voice_conversion_models/multilingual/multi-dataset/openvoice_v2")
model_name = "tts_models/rus/fairseq/vits"
```
#### XTTSv2
```python
# In xttsv2.py, set the language as follows:
language = "ru"
```
## English dataset generation
#### VITS + KNN-VC
```python
# In vits_knnvc.py, set the models as follows:
knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
model_name = "tts_models/en/ljspeech/vits"
```
#### VITS + OpenVoice
```python
# In vits_openvoice.py, set the models as follows:
vc_openvoice = TTS("voice_conversion_models/multilingual/multi-dataset/openvoice_v2")
model_name = "tts_models/eng/fairseq/vits"
```
#### XTTSv2
```python
# In xttsv2.py, set the language as follows:
language = "en"
```
## German dataset generation
#### VITS + KNN-VC
```python
# In vits_knnvc.py, set the models as follows:
knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
model_name = "tts_models/de/css10/vits-neon"
```
#### XTTSv2
```python
# In xttsv2.py, set the language as follows:
language = "de"
```
## Spanish dataset generation
#### VITS + OpenVoice
```python
# In vits_openvoice.py, set the models as follows:
vc_openvoice = TTS("voice_conversion_models/multilingual/multi-dataset/openvoice_v2")
model_name = "tts_models/spa/fairseq/vits"
```
#### XTTSv2
```python
# In xttsv2.py, set the language as follows:
language = "es"
```
## Mandarin dataset generation
#### Tacotron + KNN-VC
```python
# In vits_knnvc.py, set the models as follows:
knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True)
model_name = "tts_models/zh-CN/baker/tacotron2-DDC-GST"
```
#### Bark + FreeVC
```python
# In vits_freevc.py, set the models as follows:
vc_freevc = TTS("voice_conversion_models/multilingual/vctk/freevc24")
model_name = "tts_models/multilingual/multi-dataset/bark"
``` |