---
task_categories:
- automatic-speech-recognition
language:
- en
- zh
tags:
- speaker-diarization
- meeting-transcription
- bilingual
license: apache-2.0
---
# Dataset Card for Multi-Talker-SD

## Dataset Description
Multi-Talker-SD is a large-scale bilingual (English–Mandarin) multi-speaker meeting dataset designed to support research on speaker diarization and meeting transcription.
- Size: 1,000 simulated meetings
- Participants per meeting: 10–30 speakers
- Average duration: ~20 minutes per meeting, up to one hour
- Languages: English, Mandarin (code-switching possible)
- Audio characteristics: realistic speaker overlap, turn-taking patterns, reverberation, and noise injection
- Metadata: speaker gender, language, session type, utterance timing
The audio is synthesized using utterances from AIShell-1 (Mandarin) and LibriSpeech (English), with added noise and reverberation to approximate real meeting conditions.
- Curated by: AISG Speech Lab
- License: Apache-2.0
### Dataset Sources

- Repository: GitHub - Multi-Talker-SD
- Dataset on HF Hub: [Multi-Talker-SD](https://huggingface.co/datasets/yihao005/Multi-Talker-SD)
## Uses

### Direct Use
- Research on speaker diarization under multilingual and overlapped speech conditions
- Meeting transcription in bilingual settings
- Controlled experiments on the effects of speaker metadata (gender, language, etc.)
- Training and evaluation of overlap-aware diarization models
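For overlap-aware experiments, a useful first statistic is how much of the speech in a meeting is overlapped. The helper below is a hypothetical analysis sketch (not part of the dataset's tooling): it rasterizes speaker segments, given as `(onset, duration)` pairs in seconds, onto a fixed time grid and reports the fraction of speech frames with two or more active speakers.

```python
def overlap_ratio(segments, step=0.01):
    """Fraction of speech time where >= 2 speakers are active.

    segments: list of (onset_s, duration_s) pairs, one per speaker turn.
    step: grid resolution in seconds (10 ms frames here).
    Illustrative helper only; not an official dataset utility.
    """
    if not segments:
        return 0.0
    end = max(on + dur for on, dur in segments)
    counts = [0] * (int(end / step) + 1)
    for on, dur in segments:
        # Mark every frame covered by this turn as active for one speaker.
        for i in range(int(on / step), int((on + dur) / step)):
            counts[i] += 1
    speech = sum(1 for c in counts if c >= 1)    # frames with any speech
    overlap = sum(1 for c in counts if c >= 2)   # frames with overlapped speech
    return overlap / speech if speech else 0.0

# Speaker A talks 0-4 s, speaker B talks 3-6 s: 1 s overlap in 6 s of speech.
ratio = overlap_ratio([(0.0, 4.0), (3.0, 3.0)])
print(ratio)  # ~ 1/6
```

The same function can be applied per meeting to stratify evaluation results by overlap level.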
## Dataset Structure
- Audio files (.wav): multi-speaker simulated meetings
- RTTM files: diarization annotations with speaker labels and timestamps
- Metadata files: speaker profiles including gender, language, and session type
Each example contains:
- Meeting ID
- List of speakers (with attributes)
- Audio waveform
- RTTM segmentation
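The diarization annotations follow the standard NIST RTTM convention, where each `SPEAKER` row carries a file ID, channel, onset, duration, and speaker label. A minimal parser can be sketched as follows (the field layout is the standard RTTM one; the example file ID and speaker labels are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    file_id: str
    onset: float      # turn start time in seconds
    duration: float   # turn length in seconds
    speaker: str

def parse_rttm(lines):
    """Yield speaker segments from RTTM lines, skipping non-SPEAKER rows.

    RTTM row layout: SPEAKER <file> <chan> <onset> <dur> <NA> <NA> <spk> <NA> <NA>
    """
    for line in lines:
        fields = line.split()
        if not fields or fields[0] != "SPEAKER":
            continue
        yield Segment(
            file_id=fields[1],
            onset=float(fields[3]),
            duration=float(fields[4]),
            speaker=fields[7],
        )

# Example: two overlapping turns in a simulated meeting (hypothetical values).
rttm = [
    "SPEAKER meeting_0001 1 12.50 3.20 <NA> <NA> spk03 <NA> <NA>",
    "SPEAKER meeting_0001 1 14.10 2.00 <NA> <NA> spk07 <NA> <NA>",
]
segments = list(parse_rttm(rttm))
print(segments[0].speaker, segments[0].onset)  # spk03 12.5
```

Libraries such as pyannote.audio also ship RTTM loaders if a full toolkit is preferred.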
## Dataset Creation

### Source Data
- English speech: LibriSpeech
- Mandarin speech: AIShell-1
- Noise sources: point-source and diffuse-field noise corpora
- Processing: audio mixing, reverberation simulation, overlap control
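The overlap-control step can be illustrated with a small mixing sketch. This is an assumption-laden simplification of the actual pipeline (which also applies reverberation and noise): it simply starts a second utterance a fixed number of seconds before the first one ends and sums the waveforms.

```python
import numpy as np

def mix_with_overlap(utt_a, utt_b, sr, overlap_s):
    """Mix two utterances so utt_b starts `overlap_s` seconds before utt_a ends.

    Illustrative overlap control only; the dataset's real pipeline also adds
    reverberation and noise, which are omitted here.
    """
    offset = max(len(utt_a) - int(overlap_s * sr), 0)
    out = np.zeros(max(len(utt_a), offset + len(utt_b)), dtype=np.float32)
    out[: len(utt_a)] += utt_a
    out[offset : offset + len(utt_b)] += utt_b
    return out

sr = 16000
a = (0.1 * np.random.randn(sr * 2)).astype(np.float32)  # 2 s utterance
b = (0.1 * np.random.randn(sr * 3)).astype(np.float32)  # 3 s utterance
mixed = mix_with_overlap(a, b, sr, overlap_s=0.5)
# Total duration: 2 s + 3 s - 0.5 s overlap = 4.5 s
print(len(mixed) / sr)  # 4.5
```

Sweeping `overlap_s` is one way to generate the controlled overlap conditions the dataset describes.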
### Annotations
- Speaker metadata derived from source datasets (AIShell-1, LibriSpeech)
### Personal and Sensitive Information
No personally identifiable or sensitive data is included: all speech is drawn from the public AIShell-1 and LibriSpeech corpora.
## Bias, Risks, and Limitations
- Synthetic nature: Conversations are generated by concatenating utterances from read speech corpora, which may not fully capture spontaneous conversational dynamics.
- Accent and demographic bias: Limited to AIShell-1 (Mandarin, mostly standard accent) and LibriSpeech (English, US/UK accents), which may not represent broader linguistic diversity.
- Overlap control: Overlap patterns are simulated and may differ from real-world meeting interactions.
### Recommendations
Users should be cautious when generalizing results obtained on this dataset to real-world meetings. Complementary evaluation on real conversational datasets is recommended.
## Citation
If you use Multi-Talker-SD in your research, please cite:
```bibtex
@misc{multi_talker_sd,
  title        = {Multi-Talker-SD: Large-Scale Bilingual Meeting Diarization Dataset},
  author       = {Wu, Yihao and Zheng, Haorui and Zhang, Junjie and Chen, Weiguang and Tran, The Anh and Adnan, Azmat and Rao, Wei and Zhong, Xionghu and Chng, Eng Siong},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/yihao005/Multi-Talker-SD}},
}
```