# Playlogue with SOT Format for Multi-Speaker ASR

A processed version of the Playlogue dataset in Serialized Output Training (SOT) format for multi-speaker automatic speech recognition.
## Dataset Description
This dataset contains 158 conversations (~33 hours) of adult-child interactions with:
- Speaker attribution (adult vs child)
- Timestamps for each utterance
- SOT format for end-to-end multi-speaker ASR training
- Audio playback enabled in HuggingFace viewer
## Dataset Statistics
| Split | Samples | Duration |
|---|---|---|
| Train | 97 | ~19 hours |
| Validation | 27 | ~6 hours |
| Test | 34 | ~8 hours |
| Total | 158 | ~33 hours |
## Corpora Included
- EllisWeismer: Children with/without language delays (play sessions)
- Gleason: Parent-child play interactions
- VanHouten: Examiner-child free play
- Cameron: African American English + Standard American English narratives
## Data Format

Each sample contains the following fields:

```python
{
    'audio': Audio(sampling_rate=16000),
    'file_id': str,
    'corpus': str,              # EllisWeismer, Gleason, VanHouten, Cameron
    'participant': str,
    'split': str,               # train, val, test
    'duration': float,
    'num_utterances': int,
    'num_child_utterances': int,
    'num_adult_utterances': int,
    'sot_text': str,            # SOT-format transcript with speaker tags
    'utterances_json': str      # JSON string with per-utterance details
}
```
## SOT Format

Serialized Output Training (SOT) format embeds speaker information directly in the transcript:

```
<|startoftranscript|> <|en|> <|transcribe|>
<|0.7|> <adult> alright <|2.1|>
<|2.7|> <child> how this go out <|3.4|>
<|4.1|> <adult> um there we go <|6.0|>
<|endoftranscript|>
```

Each utterance follows the pattern `<|start_time|> <speaker> text <|end_time|>`, with times in seconds.
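For downstream use (evaluation, alignment, diarization metrics), the SOT string can be parsed back into structured utterances. The regex below is an illustrative sketch that assumes exactly the tag layout shown above (numeric `<|time|>` markers and `<adult>`/`<child>` speaker tags):

```python
import re

# Matches one utterance: <|start|> <speaker> text <|end|>
UTTERANCE_RE = re.compile(
    r"<\|(\d+(?:\.\d+)?)\|>\s*<(adult|child)>\s*(.*?)\s*<\|(\d+(?:\.\d+)?)\|>"
)

def parse_sot(sot_text):
    """Return a list of (start, speaker, text, end) tuples from an SOT transcript."""
    return [
        (float(start), speaker, text, float(end))
        for start, speaker, text, end in UTTERANCE_RE.findall(sot_text)
    ]

sot = (
    "<|startoftranscript|> <|en|> <|transcribe|> "
    "<|0.7|> <adult> alright <|2.1|> "
    "<|2.7|> <child> how this go out <|3.4|> "
    "<|endoftranscript|>"
)
utterances = parse_sot(sot)
```

The non-transcript tokens (`<|startoftranscript|>`, `<|en|>`, and so on) are skipped automatically because they are not numeric timestamps.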
## Usage

### Load Dataset

```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset("sulaimank/playlogue")

# Access splits
train = dataset['train']
val = dataset['validation']
test = dataset['test']

# Get a sample
sample = train[0]
audio_array = sample['audio']['array']
sampling_rate = sample['audio']['sampling_rate']
sot_text = sample['sot_text']
```
### Filter by Corpus

```python
# Keep EllisWeismer conversations only
ellisweismer = dataset.filter(lambda x: x['corpus'] == 'EllisWeismer')

# Keep Gleason conversations only
gleason = dataset.filter(lambda x: x['corpus'] == 'Gleason')
```
### Filter by Participant

```python
# Get all conversations for participant 'andy'
andy_data = dataset.filter(lambda x: x['participant'] == 'andy')
```
### Training Example

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load model and processor
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Extend the vocabulary with the speaker tokens used in sot_text
new_tokens = ["<adult>", "<child>"]
processor.tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(processor.tokenizer))

# Prepare data: log-mel features as inputs, SOT token ids as labels
def prepare_dataset(batch):
    audio = batch["audio"]
    batch["input_features"] = processor.feature_extractor(
        audio["array"],
        sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sot_text"]).input_ids
    return batch

# Map over every split; for a DatasetDict, column_names is keyed by split
dataset = dataset.map(prepare_dataset, remove_columns=dataset["train"].column_names)
```
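The mapped dataset still needs batching before training. A minimal padding collator might look like the sketch below (`collate_sot_batch` is an illustrative name; the standard Whisper fine-tuning recipes use a `DataCollatorSpeechSeq2SeqWithPadding` class built on the processor instead):

```python
import torch

def collate_sot_batch(features, label_pad=-100):
    """Stack fixed-size log-mel inputs and pad label sequences for seq2seq training."""
    # The Whisper feature extractor emits fixed-size spectrograms, so stacking works
    input_features = torch.stack(
        [torch.as_tensor(f["input_features"]) for f in features]
    )
    # Pad labels to the longest sequence; -100 is ignored by the cross-entropy loss
    max_len = max(len(f["labels"]) for f in features)
    labels = torch.full((len(features), max_len), label_pad, dtype=torch.long)
    for i, f in enumerate(features):
        labels[i, : len(f["labels"])] = torch.as_tensor(f["labels"])
    return {"input_features": input_features, "labels": labels}
```

The resulting dict can be fed to `Seq2SeqTrainer` via its `data_collator` argument.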
## Citation

Original Playlogue dataset:

```bibtex
@article{kalanadhabhatta2024playlogue,
  title={Playlogue: Dataset and Benchmarks for Analyzing Adult-Child Conversations During Play},
  author={Kalanadhabhatta, Manasa and Rastikerdar, Mohammad Mehdi and Rahman, Tauhidur and Grabell, Adam S. and Ganesan, Deepak},
  journal={Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies},
  volume={8},
  number={4},
  year={2024},
  publisher={ACM},
  url={https://doi.org/10.1145/3699775}
}
```
## License
This dataset follows the same license as the original Playlogue dataset. Users must follow CHILDES ground rules for data usage.
## Acknowledgments
- Original Playlogue dataset by Kalanadhabhatta et al.
- CHILDES/TalkBank for source data
- SOT format inspired by multi-speaker ASR research