---
license: mit
dataset_info:
  features:
    - name: utt_id
      dtype: string
    - name: path
      dtype: audio
    - name: label
      dtype: string
    - name: source
      dtype: string
    - name: source_text
      dtype: string
    - name: source_speaker_id
      dtype: string
    - name: replay_details
      struct:
        - name: room_size
          dtype: string
        - name: player
          dtype: string
        - name: recorder
          dtype: string
        - name: distance
          dtype: string
    - name: synthesis_details
      struct:
        - name: model
          dtype: string
        - name: reference
          dtype: string
        - name: reference_text
          dtype: string
        - name: reference_speaker_id
          dtype: string
  splits:
    - name: train
      num_bytes: 1881368369.834
      num_examples: 39926
    - name: dev
      num_bytes: 190120550.729
      num_examples: 3973
    - name: closed_set_eval
      num_bytes: 276895281.202
      num_examples: 5991
    - name: open_set_eval
      num_bytes: 1199943251
      num_examples: 25600
  download_size: 3263822188
  dataset_size: 3548327452.7650003
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: dev
        path: data/dev-*
      - split: closed_set_eval
        path: data/closed_set_eval-*
      - split: open_set_eval
        path: data/open_set_eval-*
---

# EchoFake: A Replay-Aware Dataset for Practical Speech Deepfake Detection

Paper link: http://arxiv.org/abs/2510.19414

Code for baseline models is available at https://github.com/EchoFake/EchoFake

Auto-recording tools are available at https://github.com/EchoFake/EchoFake/tree/main/tools

## Abstract

The growing prevalence of speech deepfakes has raised serious concerns, particularly in real-world scenarios such as telephone fraud and identity theft. While many anti-spoofing systems have demonstrated promising performance on laboratory-generated synthetic speech, they often fail when confronted with physical replay attacks—a common and low-cost form of attack used in practical settings. Our experiments show that models trained on existing datasets exhibit severe performance degradation, with average accuracy dropping to 59.6% when evaluated on replayed audio. To bridge this gap, we present EchoFake, a comprehensive dataset comprising more than 120 hours of audio from over 13,000 speakers, featuring both cutting-edge zero-shot text-to-speech (TTS) speech and physical replay recordings collected under varied device configurations and real-world environmental settings. Additionally, we evaluate three baseline detection models and show that models trained on EchoFake achieve lower average EERs across datasets, indicating better generalization. By introducing more practical challenges relevant to real-world deployment, EchoFake offers a more realistic foundation for advancing spoofing detection methods.
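The split sizes declared in the metadata block above can be cross-checked with a few lines of Python. The sketch below copies the per-split example counts and byte sizes from the dataset card and totals them; the commented-out `load_dataset` call is the standard Hugging Face `datasets` loader for this repo (it requires the `datasets` library and network access, so it is shown as a comment only).

```python
# Per-split (num_examples, num_bytes) pairs, copied from the dataset card.
SPLITS = {
    "train": (39926, 1881368369.834),
    "dev": (3973, 190120550.729),
    "closed_set_eval": (5991, 276895281.202),
    "open_set_eval": (25600, 1199943251.0),
}

total_examples = sum(n for n, _ in SPLITS.values())
total_bytes = sum(b for _, b in SPLITS.values())

print(total_examples)  # 75490 examples across all four splits
print(total_bytes)     # ~3548327452.765 bytes, matching dataset_size

# To actually fetch a split by name (requires `datasets` and network access):
#   from datasets import load_dataset
#   train = load_dataset("EchoFake/EchoFake", split="train", streaming=True)
```

Streaming mode is worth considering here, since the full download is roughly 3 GB.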