---
task_categories:
  - automatic-speech-recognition
language:
  - en
  - zh
tags:
  - speaker-diarization
  - meeting-transcription
  - bilingual
license: apache-2.0
---

# Dataset Card for Multi-Talker-SD

## Dataset Description

Multi-Talker-SD is a large-scale bilingual (English–Mandarin) multi-speaker meeting dataset designed to support research on speaker diarization and meeting transcription.

- **Size:** 1,000 simulated meetings
- **Participants per meeting:** 10–30 speakers
- **Average duration:** ~20 minutes per meeting, up to one hour
- **Languages:** English, Mandarin (code-switching possible)
- **Audio characteristics:** realistic speaker overlap, turn-taking patterns, reverberation, and noise injection
- **Metadata:** speaker gender, language, session type, utterance timing

The audio is synthesized using utterances from AIShell-1 (Mandarin) and LibriSpeech (English), with added noise and reverberation to approximate real meeting conditions.

- **Curated by:** AISG Speech Lab
- **License:** Apache-2.0


## Direct Use

- Research on speaker diarization under multilingual and overlapped speech conditions
- Meeting transcription in bilingual settings
- Controlled experiments on the effects of speaker metadata (gender, language, etc.)
- Training and evaluation of overlap-aware diarization models
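For evaluation, diarization output is typically scored against the RTTM references with a diarization error rate (DER). The sketch below illustrates the idea at frame level with a hypothetical `frame_der` helper; real evaluations should use standard tooling such as `dscore` or `pyannote.metrics` on the RTTM files directly.

```python
# Minimal frame-level diarization-error sketch (illustrative only).
# Each timeline is a list of per-frame speaker-label sets, so overlapped
# frames simply contain more than one label.

def frame_der(ref, hyp):
    """DER-like score: (miss + false alarm + confusion) / total ref speech."""
    miss = fa = conf = ref_total = 0
    for r, h in zip(ref, hyp):
        ref_total += len(r)
        miss += max(len(r) - len(h), 0)   # ref speakers with no hyp slot
        fa += max(len(h) - len(r), 0)     # hyp speakers with no ref slot
        matched = min(len(r), len(h))
        conf += matched - min(matched, len(r & h))  # mismatched labels
    return (miss + fa + conf) / ref_total if ref_total else 0.0

# Two frames: one correct, one with a missed speaker in an overlap region.
ref = [{"spk1"}, {"spk1", "spk2"}]
hyp = [{"spk1"}, {"spk1"}]
print(frame_der(ref, hyp))  # 1 miss over 3 ref speaker-frames -> 0.3333333333333333
```

Note that frame-level scoring like this ignores the collar and optimal speaker mapping that official scorers apply, so it should only be used as a sanity check.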

## Dataset Structure

- **Audio files (`.wav`):** multi-speaker simulated meetings
- **RTTM files:** diarization annotations with speaker labels and timestamps
- **Metadata files:** speaker profiles including gender, language, and session type

Each example contains:

- Meeting ID
- List of speakers (with attributes)
- Audio waveform
- RTTM segmentation
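The RTTM annotations can be read with a few lines of standard-library Python. The sketch below assumes the standard NIST RTTM field layout (`SPEAKER <file-id> <channel> <onset> <duration> <NA> <NA> <speaker> <NA> <NA>`); the meeting ID and speaker labels shown are made-up examples, not actual entries from this dataset.

```python
# Parse RTTM SPEAKER lines into (speaker, start, end) segments.
# Standard NIST RTTM field layout:
# SPEAKER <file-id> <channel> <onset> <duration> <NA> <NA> <speaker> <NA> <NA>

def parse_rttm(lines):
    segments = []
    for line in lines:
        fields = line.split()
        if not fields or fields[0] != "SPEAKER":
            continue  # skip non-speech or malformed rows
        onset, duration = float(fields[3]), float(fields[4])
        segments.append((fields[7], onset, onset + duration))
    return segments

example = [
    "SPEAKER meeting_0001 1 12.50 3.25 <NA> <NA> spk04 <NA> <NA>",
    "SPEAKER meeting_0001 1 14.00 2.50 <NA> <NA> spk11 <NA> <NA>",
]
print(parse_rttm(example))
# [('spk04', 12.5, 15.75), ('spk11', 14.0, 16.5)] -- overlap from 14.0 to 15.75
```

Overlap regions fall out directly from the segments: any two segments whose intervals intersect (as in the example above) mark simultaneous speech.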

## Dataset Creation

### Source Data

- **English speech:** LibriSpeech
- **Mandarin speech:** AIShell-1
- **Noise sources:** point-source and diffuse-field noise corpora
- **Processing:** audio mixing, reverberation simulation, overlap control
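The core of the overlap-control step is placing source utterances into a shared meeting track at chosen offsets so that segments intersect. The toy sketch below (not the actual generation pipeline) shows the principle with plain float lists standing in for waveforms; the real pipeline additionally applies room impulse responses and noise.

```python
# Toy sketch of overlap-controlled mixing: add a second utterance into the
# meeting track at a chosen offset so that it overlaps the first.

def mix_at(track, utterance, offset):
    """Sum `utterance` samples into `track` starting at sample `offset`."""
    end = offset + len(utterance)
    if end > len(track):
        track.extend([0.0] * (end - len(track)))  # grow the track as needed
    for i, sample in enumerate(utterance):
        track[offset + i] += sample  # summation creates the overlapped region
    return track

track = [0.0] * 8
mix_at(track, [1.0] * 4, 0)   # speaker A: samples 0-3
mix_at(track, [0.5] * 4, 2)   # speaker B: samples 2-5 -> overlap at samples 2-3
print(track)  # [1.0, 1.0, 1.5, 1.5, 0.5, 0.5, 0.0, 0.0]
```

Varying the offsets (and the gaps between consecutive utterances) is what controls the overlap ratio and turn-taking pattern of each simulated meeting.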

### Personal and Sensitive Information

No personally identifiable or sensitive data is included. All speech is sourced from public corpora.