language:
- en
license: mit
dataset_info:
features:
- name: id
dtype: string
- name: channel
dtype: string
- name: video_id
dtype: string
- name: video_title
dtype: string
- name: speaker
dtype: string
- name: text
dtype: string
- name: pos_tags
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: start_time
dtype: float64
- name: end_time
dtype: float64
- name: upload_date
dtype: int64
splits:
- name: train
num_bytes: 20378652081.104
num_examples: 756072
download_size: 17813006387
dataset_size: 20378652081.104
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
The YouTube Corpus of Singapore English Podcasts (YCSEP)
The YouTube Corpus of Singapore English Podcasts (YCSEP) contains 620 hours of audio and ASR transcripts from over 1,300 podcast episodes by Singapore-based content creators, comprising 756k individual speaker turns and 8.38 million word tokens.
Dataset Description
YCSEP was created using a pipeline comprising yt-dlp, WhisperX, and pyannote.audio, and is intended to advance the study of the linguistic and discourse properties of Singapore English.
- Curated by: Steven Coats, Carmelo Alessandro Basile, Cameron Morin, Robert Fuchs
- Funded by: European Union – NextGenerationEU instrument/Research Council of Finland grant number 358720
- Language(s) (NLP): English
- License: MIT
Dataset Sources
- Static version (transcripts only): Harvard Dataverse
- Paper: Coats, Steven, Carmelo Alessandro Basile, Cameron Morin, and Robert Fuchs. Forthcoming. The YouTube Corpus of Singapore English Podcasts. English World-Wide.
- Search site: YCSEP
Potential Uses
Corpus-linguistic analysis of lexis, grammar, and interaction in Singapore English and Singapore-related discourse.
Direct Use
The dataset is intended for direct use in research on Singapore English and related discourse. Use cases include syntactic, lexical, and pragmatic analysis, as well as speech processing, sociolinguistic variation studies, and corpus-based natural language processing tasks.
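One way to access the corpus for such analyses is via the Hugging Face `datasets` library, using the Hub id `stcoats/YCSEP_v1` given in the citation. The following is a minimal sketch, not an official access script; the import is kept inside the loader so the helper functions can be used without the library installed, and streaming mode avoids materializing the full ~17 GB download.

```python
def iter_turns(limit=5):
    """Yield the first `limit` speaker turns from the train split.

    Lazy import: `datasets` is only needed when actually fetching data.
    Streaming avoids downloading the full corpus up front.
    """
    from datasets import load_dataset
    ds = load_dataset("stcoats/YCSEP_v1", split="train", streaming=True)
    for i, row in enumerate(ds):
        if i >= limit:
            break
        yield row

def keep_channel(rows, channel):
    """Filter an iterable of turn rows to a single YouTube channel."""
    return (r for r in rows if r["channel"] == channel)
```

For example, `keep_channel(iter_turns(100), "SomeChannel")` would yield only the turns from the first hundred rows that belong to one channel.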
Access to this dataset is granted for non-commercial research and educational purposes only, in line with Article 4 of the EU Directive on Copyright in the Digital Single Market (EU DSM Directive), Fair Use provisions under US copyright law (17 U.S. Code § 107), and Fair Use provisions under Singapore’s Copyright Act 2021.
By requesting access, users affirm that their use complies with these principles and that the dataset will only be used for non-commercial research or teaching purposes. Proper citation of the dataset and related publications must be included in all outputs. If you are unsure whether your use qualifies, please contact the curator: @stcoats
Out-of-Scope Use
Because the transcripts are the output of Whisper's large-v3 model, the dataset is not appropriate for training commercial ASR systems or TTS models. It is also not suitable for applications requiring speaker consent or fine-grained demographic metadata, as this information is not uniformly available.
Dataset Structure
Each row in the dataset represents a speaker turn, with the following fields:
id: Unique utterance identifier
channel: YouTube channel name
video_id: YouTube video identifier
video_title: Title of the podcast episode
speaker: Speaker label from diarization
text: Transcript of the utterance
pos_tags: Universal POS tags from spaCy
audio: Audio segment (MP3 or waveform, 16 kHz sampling rate)
start_time: Start time in the video (in seconds)
end_time: End time in the video (in seconds)
upload_date: Date of upload (YYYYMMDD, stored as an integer)
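To illustrate how the fields fit together, here is a hypothetical row (all values invented, not taken from the corpus) with two small helpers, assuming pos_tags is stored as a space-separated string with one tag per whitespace token:

```python
# Invented example row mirroring the field list above.
example = {
    "id": "a1b2c3d4e5f_0007",
    "channel": "ExamplePodcastSG",
    "video_id": "a1b2c3d4e5f",
    "video_title": "Episode 7",
    "speaker": "SPEAKER_00",
    "text": "This one very nice lah",
    "pos_tags": "DET NUM ADV ADJ INTJ",  # illustrative tags, not real model output
    "start_time": 12.48,
    "end_time": 14.02,
    "upload_date": 20250115,
}

def turn_duration(row):
    """Length of a speaker turn in seconds."""
    return row["end_time"] - row["start_time"]

def aligned_pos(row):
    """Pair each whitespace token with its Universal POS tag."""
    return list(zip(row["text"].split(), row["pos_tags"].split()))
```

Here `turn_duration(example)` gives 1.54 seconds, and `aligned_pos(example)` pairs each token with its tag, e.g. `("This", "DET")`.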
Dataset Creation
The dataset was created in early 2025.
Curation Rationale
Singapore English is underrepresented in publicly available corpora. This dataset fills a gap by providing a large, naturalistic, and contemporary corpus of spoken Singapore English in a media format. It was curated to support both linguistic research and machine learning tasks involving regional varieties of English.
Source Data
The data was created from podcast recordings by Singapore-based podcasters. See the linked paper for more details.
Annotations
Annotations include a unique identifier for each utterance (id), the YouTube channel, video identifier, and video title for the utterance (channel, video_id, video_title), the transcribed speech (text), the corresponding part-of-speech tags (pos_tags), the audio (audio), start and end times for the utterance in the corresponding video (start_time, end_time), and the date the podcast was uploaded to YouTube (upload_date).
Annotation process
Transcriptions were generated automatically using WhisperX with word-level alignment. POS tags were applied using spaCy's en_core_web_sm model. No manual correction was performed due to the scale of the data, but filtering and consistency checks were applied to remove low-confidence segments.
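The tagging step can be sketched as follows. The spaCy call is shown in comments so the snippet runs without the en_core_web_sm model installed; the serialization into a space-separated string is an assumption about how pos_tags is stored, not a documented format.

```python
def serialize_tags(tags):
    """Join per-token Universal POS tags into the space-separated
    string form assumed for the pos_tags field."""
    return " ".join(tags)

# With spaCy and the model installed, tagging one utterance looks like:
# import spacy
# nlp = spacy.load("en_core_web_sm")
# doc = nlp("The weather is very hot today")
# pos_tags = serialize_tags([tok.pos_ for tok in doc])
```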
Who are the annotators?
The annotations were automatically generated using pre-trained machine learning models (WhisperX for ASR/alignment, pyannote for diarization, and spaCy for POS tagging). No manual annotators were involved.
Personal and Sensitive Information
The dataset may contain incidental personal information disclosed during podcast episodes, but it includes no additional metadata about speakers. Because the data is derived from publicly available YouTube content, research use falls under the fair-use and text-and-data-mining provisions described above; researchers should nonetheless be mindful of ethical implications and avoid extracting or redistributing sensitive content.
Bias, Risks, and Limitations
Bias: The speakers are self-selected public figures or podcast hosts, and may not reflect broader Singaporean demographics.
Limitations: Automatic transcription may misrepresent speech, particularly with code-switching or non-standard pronunciation.
Risk: The dataset includes personal speech, so users should avoid downstream applications that attempt re-identification or speaker profiling.
Recommendations
Users should be made aware of the risks, biases, and limitations of the dataset before using it in downstream research or applications.
Citation
Dataset:
BibTeX:
@misc{coats2025ycsep,
author = {Coats, Steven and Basile, Carmelo Alessandro and Morin, Cameron and Fuchs, Robert},
title = {The YouTube Corpus of Singapore English Podcasts (YCSEP)},
year = {2025},
howpublished = {\url{https://huggingface.co/datasets/stcoats/YCSEP_v1}},
note = {Dataset}
}
APA:
Coats, S., Basile, C. A., Morin, C., & Fuchs, R. (2025). The YouTube Corpus of Singapore English Podcasts (YCSEP) [Dataset]. Hugging Face. https://huggingface.co/datasets/stcoats/YCSEP_v1
Paper:
BibTeX:
@article{coats2024eww,
author = {Coats, Steven and Basile, Carmelo Alessandro and Morin, Cameron and Fuchs, Robert},
title = {The YouTube Corpus of Singapore English Podcasts},
journal = {English World-Wide},
year = {forthcoming}
}
APA:
Coats, S., Basile, C. A., Morin, C., & Fuchs, R. (forthcoming). The YouTube Corpus of Singapore English Podcasts. English World-Wide.