---
license: apache-2.0
---
# ARFAKE: A Multi-Dialect Benchmark and Baselines for Arabic Spoof-Speech Detection

## Overview

ARFAKE is the **first multi-dialect Arabic spoof-speech benchmark**, designed to evaluate and advance anti-spoofing systems for Arabic audio. With the rapid progress of generative text-to-speech (TTS) and voice-cloning models, distinguishing real from synthetic speech has become increasingly challenging, especially for Arabic and its diverse dialects, a language family that has been underrepresented in previous deepfake-detection research.

This repository provides:
- The **ARFAKE dataset**, built on top of the **Casablanca speech corpus** (8 dialects, ~6 hours each).
- **Spoofed versions** generated using state-of-the-art TTS systems:
  - XTTS-v2
  - FishSpeech
  - ArTST
  - VITS
- **Baselines and evaluation pipeline** for detecting spoofed speech using both traditional ML and modern embedding-based models.

---
## Key Features

- 📀 **Multi-dialect coverage**: Eight Arabic dialects, balanced across bonafide and spoofed samples.
- 🎙️ **Spoofed data generation**: Using large-scale multilingual and Arabic-specific TTS models.
- 🧪 **Detection baselines**:
  - MFCC + classical ML classifiers (SVM, Random Forest, etc.)
  - Embedding-based models using **HuBERT**, **Whisper**, and **Wav2Vec 2.0**
  - **RawNet2**, the standard ASVspoof baseline system
- 🔍 **Evaluation metrics** (a minimal EER sketch follows this list):
  - **Equal Error Rate (EER)**
  - **Accuracy**
  - **Mean Opinion Score (MOS)** (via human ratings)
  - **Word Error Rate (WER)** (via Whisper-Large ASR)
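EER is the operating point where the false-acceptance and false-rejection rates are equal. Below is a minimal sketch of computing EER from detector scores with scikit-learn; the score convention (higher = more likely bonafide) and the interpolation are assumptions, not the paper's exact implementation.

```python
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(labels, scores):
    """Approximate EER from binary labels (1 = bonafide, 0 = spoof)
    and detector scores (higher = more likely bonafide)."""
    fpr, tpr, _ = roc_curve(labels, scores, pos_label=1)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))   # point where FAR ≈ FRR
    return 0.5 * (fpr[idx] + fnr[idx])

# Example: eer = compute_eer(y_true, detector_scores)
```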

---

## Dataset

- **Source corpus**: Casablanca dataset (2024)
- **Size**: 54,413 utterances (~31k train samples, ~23k test samples; see the loading sketch after this list)
- **Composition**:
  - Bonafide (genuine) speech
  - Spoofed speech from FishSpeech, XTTS-v2, and ArTST
- **Dialectal coverage**: DZ, EG, JO, MA, MR, PS, AE, YE (ISO 3166-1 alpha-2 codes)
- **Distribution**: see Figure 1 in the paper.
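A minimal sketch of loading the released splits with the 🤗 `datasets` library. The repository id, column names, and the mapping to `merge_training_set`, `merge_test_set`, and `Vits-spoofed` are placeholders based on the descriptions in this card; check the Files tab of this repository for the exact layout.

```python
from datasets import load_dataset, Audio

# Hypothetical repo id; replace with the actual Hugging Face dataset id of this card.
REPO_ID = "<org>/ARFAKE"

train = load_dataset(REPO_ID, split="train")   # corresponds to merge_training_set
test_in = load_dataset(REPO_ID, split="test")  # corresponds to merge_test_set (in-domain)
# The VITS-spoofed subset serves as an out-of-domain test set (see Usage below).

# Decode audio at 16 kHz if the split exposes an "audio" column.
train = train.cast_column("audio", Audio(sampling_rate=16_000))
print(train[0].keys())
```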

---

## Baselines & Results

- **Embedding-based models** outperform traditional MFCC-based ML classifiers (a sketch of the MFCC baseline follows this list).
- **Whisper-large** achieved the best detection performance (EER of 6.92% on FishSpeech-generated data).
- **FishSpeech** produced the most challenging spoofed samples, with the highest MOS (3.72/5) and lowest WER, making it harder to detect than XTTS-v2, ArTST, or VITS.
- Classifiers trained on the combined dataset generalized well, even to unseen TTS models such as VITS.
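As a point of reference for the MFCC-based baselines, here is a minimal sketch of one such classifier (mean/std-pooled MFCCs and an SVM). The feature settings, helper names, and classifier choice are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_features(path, sr=16_000, n_mfcc=20):
    """Mean- and std-pooled MFCCs as a fixed-length utterance vector."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_mfcc_svm(train_paths, train_labels):
    """Fit an SVM on pooled MFCC features; labels: 1 = bonafide, 0 = spoof."""
    X = np.stack([mfcc_features(p) for p in train_paths])
    clf = make_pipeline(StandardScaler(), SVC(probability=True))
    return clf.fit(X, train_labels)
```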
**Summary of findings**:

- FishSpeech is the most realistic and hardest-to-detect TTS system for Arabic spoofing.
- Combining spoofed data from multiple TTS models improves detector generalization.
- Whisper-based detectors outperform MFCC-based baselines by a wide margin (a sketch of an embedding-based detector follows this list).
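A minimal sketch of the embedding-based approach: a frozen speech encoder pools each utterance into a vector, and a lightweight classifier is trained on top. Wav2Vec 2.0 is shown here (HuBERT or the Whisper encoder can be swapped in analogously); the checkpoint, mean pooling, and logistic-regression head are assumptions, not the paper's exact setup.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

MODEL_ID = "facebook/wav2vec2-base"  # HuBERT / Whisper encoders are used similarly
extractor = Wav2Vec2FeatureExtractor.from_pretrained(MODEL_ID)
encoder = Wav2Vec2Model.from_pretrained(MODEL_ID).eval()

@torch.no_grad()
def embed(waveform, sr=16_000):
    """Mean-pool the encoder's last hidden state into one utterance embedding."""
    inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
    hidden = encoder(**inputs).last_hidden_state   # (1, frames, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()

def train_embedding_detector(waveforms, labels):
    """Fit a simple classifier head on frozen embeddings (1 = bonafide, 0 = spoof)."""
    X = np.stack([embed(w) for w in waveforms])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```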

---

## Usage

1. **Dataset Access**

   The dataset is uploaded to this repository. Use `merge_training_set` to train your model, `merge_test_set` for in-domain evaluation, and `Vits-spoofed` for out-of-domain evaluation (see the loading sketch in the Dataset section above).

2. **Training Baseline Models**

   - Classical ML: train SVM, Random Forest, etc. on MFCC features.
   - Embedding-based: use pre-trained HuBERT / Whisper / Wav2Vec 2.0 encoders with classifier heads.
   - Benchmark against RawNet2.

3. **Evaluation**

   - Run detection and report **EER**, **Accuracy**, **MOS**, and **WER**.
   - Use Whisper-Large for ASR-based evaluation (a minimal WER sketch follows these steps).
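For the ASR-based part of the evaluation, a minimal sketch of computing WER with Whisper-Large and `jiwer` is given below; the checkpoint, decoding options, and text normalization are assumptions and may differ from the paper's setup.

```python
import jiwer
from transformers import pipeline

# Whisper-Large ASR; pass language/task options via generate_kwargs if needed.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v2")

def corpus_wer(samples):
    """samples: iterable of dicts with an audio path/array under 'audio'
    and the reference transcript under 'text'."""
    hypotheses = [asr(s["audio"])["text"] for s in samples]
    references = [s["text"] for s in samples]
    return jiwer.wer(references, hypotheses)
```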

---

## Citation
```bibtex
@misc{maged2025arfakemultidialectbenchmarkbaselines,
  title={ArFake: A Multi-Dialect Benchmark and Baselines for Arabic Spoof-Speech Detection},
  author={Mohamed Maged and Alhassan Ehab and Ali Mekky and Besher Hassan and Shady Shehata},
  year={2025},
  eprint={2509.22808},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2509.22808},
}
```