---
license: apache-2.0
---

# SpeechFake Dataset

Please use the download scripts from https://github.com/XIAOYixuan/AUDDT/tree/yixuan-dev to download and process the dataset:

```shell
chmod +x download/get_speechfake.sh
./download/get_speechfake.sh
```

## Label Distribution

The dataset is organized into four experiment types:

### baseline

- train_all: 704,862 samples (spoof: 629,154, bonafide: 75,708)
- train_en: 428,266 samples (spoof: 389,866, bonafide: 38,400)
- train_zh: 276,596 samples (spoof: 239,288, bonafide: 37,308)
- test_all: 346,313 samples (spoof: 309,065, bonafide: 37,248)
- test_en: 208,655 samples (spoof: 189,455, bonafide: 19,200)
- test_zh: 137,658 samples (spoof: 119,610, bonafide: 18,048)
- dev_all: 117,463 samples (spoof: 104,845, bonafide: 12,618)
- dev_en: 71,370 samples (spoof: 64,970, bonafide: 6,400)
- dev_zh: 46,093 samples (spoof: 39,875, bonafide: 6,218)
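
The baseline counts above can be sanity-checked with a few lines of Python. The numbers are hard-coded from this table, so the check is pure arithmetic; the ratio it prints shows how imbalanced each split is toward spoofed speech:

```python
# Sanity-check the baseline counts listed above; the numbers are
# copied from this README, so this is pure arithmetic.
baseline = {
    "train_all": (629_154, 75_708, 704_862),
    "train_en":  (389_866, 38_400, 428_266),
    "train_zh":  (239_288, 37_308, 276_596),
    "test_all":  (309_065, 37_248, 346_313),
    "test_en":   (189_455, 19_200, 208_655),
    "test_zh":   (119_610, 18_048, 137_658),
    "dev_all":   (104_845, 12_618, 117_463),
    "dev_en":    (64_970, 6_400, 71_370),
    "dev_zh":    (39_875, 6_218, 46_093),
}
for split, (spoof, bonafide, total) in baseline.items():
    # Every split's spoof and bonafide counts must add up to its total.
    assert spoof + bonafide == total, split
    print(f"{split}: {spoof / bonafide:.1f} spoof samples per bonafide")
```

Every baseline split carries roughly 6 to 10 spoof samples per bonafide sample, which is worth keeping in mind when choosing evaluation metrics.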

### cross_generator

- train_TTS: 276,422 samples (all spoof)
- train_VC: 192,210 samples (all spoof)
- train_NV: 160,522 samples (all spoof)
- test_TTS: 132,733 samples (all spoof)
- test_VC: 96,059 samples (all spoof)
- test_NV: 80,273 samples (all spoof)
- dev_TTS: 46,065 samples (all spoof)
- dev_VC: 32,031 samples (all spoof)
- dev_NV: 26,749 samples (all spoof)

### cross_lingual

- train: 263,399 samples (spoof: 203,399, bonafide: 60,000)
- test: 133,707 samples (spoof: 103,707, bonafide: 30,000)
- test_ko: 85,228 samples (spoof: 82,728, bonafide: 2,500)
- test_zh: 63,283 samples (spoof: 48,283, bonafide: 15,000)
- test_en: 70,424 samples (spoof: 55,424, bonafide: 15,000)
- test_it: 44,989 samples (spoof: 39,989, bonafide: 5,000)
- test_hu: 44,981 samples (spoof: 39,981, bonafide: 5,000)
- test_id: 44,904 samples (spoof: 39,936, bonafide: 4,968)
- test_es: 48,868 samples (spoof: 43,868, bonafide: 5,000)
- test_gl: 44,838 samples (spoof: 39,838, bonafide: 5,000)
- test_lv: 37,784 samples (spoof: 32,845, bonafide: 4,939)
- test_fi: 36,578 samples (spoof: 31,619, bonafide: 4,959)
- test_et: 33,350 samples (spoof: 28,392, bonafide: 4,958)
- test_he: 21,210 samples (spoof: 20,605, bonafide: 605)
- test_is: 13,388 samples (spoof: 13,373, bonafide: 15)
- dev: 44,563 samples (spoof: 34,563, bonafide: 10,000)
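
One consistency check worth noting: the combined test split appears to be exactly the sum of test_en and test_zh. This is an observation from the counts listed here, not a documented guarantee of the dataset:

```python
# Counts copied from the cross_lingual table above.
test    = {"spoof": 103_707, "bonafide": 30_000}
test_en = {"spoof": 55_424,  "bonafide": 15_000}
test_zh = {"spoof": 48_283,  "bonafide": 15_000}

# Both label counts of `test` match the English + Chinese sum exactly.
for k in test:
    assert test[k] == test_en[k] + test_zh[k]
print("test == test_en + test_zh")
```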

### cross_speaker

- train: 34,305 samples (spoof: 27,734, bonafide: 6,571)
- test_overlap2: 18,976 samples (spoof: 12,377, bonafide: 6,599)
- test_same_spk: 20,470 samples (spoof: 13,871, bonafide: 6,599)
- test_overlap1: 19,428 samples (spoof: 13,871, bonafide: 5,557)
- test_diff_spk: 17,934 samples (spoof: 12,377, bonafide: 5,557)

## Data Attributes

Each split is stored as a parquet file with the following columns:

| ID | path | label | dataset_name |
|----|------|-------|--------------|
| 0 | Real/LibriTTS/train-clean-100/1841/150351/1841_150351_000026_000002.wav | bonafide | SpeechFake |
| 1 | Real/LibriTTS/train-clean-100/1116/132851/1116_132851_000040_000002.wav | bonafide | SpeechFake |
| 2 | Real/LibriTTS/train-clean-100/83/9960/83_9960_000022_000000.wav | bonafide | SpeechFake |
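
The path column is relative, so building playable file paths means prepending the dataset root. A minimal sketch, where `DATA_ROOT` is an assumption and should point at whatever directory the download script populated (a synthetic one-row frame stands in for a real parquet split):

```python
import os

import pandas as pd

# DATA_ROOT is an assumption -- point it at the directory that the
# download script populated with the audio files.
DATA_ROOT = "/data/SpeechFake"

# Stand-in row with the same columns as the parquet splits.
df = pd.DataFrame({
    "ID": [2],
    "path": ["Real/LibriTTS/train-clean-100/83/9960/83_9960_000022_000000.wav"],
    "label": ["bonafide"],
    "dataset_name": ["SpeechFake"],
})

# Prepend the root to every relative path.
df["abs_path"] = df["path"].map(lambda p: os.path.join(DATA_ROOT, p))
print(df["abs_path"].iloc[0])
```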

## How to Import

```python
import pandas as pd

# Example: load the baseline train_all split
df = pd.read_parquet("baseline/train_all.parquet")
print(df.head())
```
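
Once a split is loaded, the label column drives most filtering. A small sketch, using a synthetic frame with the same columns in place of a real parquet file:

```python
import pandas as pd

# Synthetic frame with the same columns as a real split.
df = pd.DataFrame({
    "ID": [0, 1, 2],
    "path": ["a.wav", "b.wav", "c.wav"],
    "label": ["bonafide", "spoof", "spoof"],
    "dataset_name": ["SpeechFake"] * 3,
})

# Per-class counts, then keep only the genuine speech.
print(df["label"].value_counts().to_dict())  # {'spoof': 2, 'bonafide': 1}
bonafide = df[df["label"] == "bonafide"]
print(len(bonafide))  # 1
```

The same two lines work unchanged on any of the parquet splits listed above.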