---
language:
- en
license: mit
dataset_info:
  features:
  - name: id
    dtype: string
  - name: channel
    dtype: string
  - name: video_id
    dtype: string
  - name: video_title
    dtype: string
  - name: speaker
    dtype: string
  - name: text
    dtype: string
  - name: pos_tags
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: start_time
    dtype: float64
  - name: end_time
    dtype: float64
  - name: upload_date
    dtype: int64
  splits:
  - name: train
    num_bytes: 20378652081.104
    num_examples: 756072
  download_size: 17813006387
  dataset_size: 20378652081.104
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# The YouTube Corpus of Singapore English Podcasts (YCSEP)


The YouTube Corpus of Singapore English Podcasts (YCSEP) contains automatic speech recognition (ASR) transcripts and aligned audio from 620 hours of more than 1,300 podcast episodes by Singapore-based content creators, comprising 756,072 individual speaker turns and 8.38 million word tokens.

### Dataset Description

YCSEP was created using a pipeline comprising yt-dlp, WhisperX, and pyannote.audio, and is intended to advance the study of the linguistic and discourse properties of Singapore English.

- **Curated by:** Steven Coats, Carmelo Alessandro Basile, Cameron Morin, Robert Fuchs
- **Funded by:** European Union – NextGenerationEU instrument/Research Council of Finland grant number 358720
- **Language(s) (NLP):** English
- **License:** MIT

### Dataset Sources

- **Static version (transcripts only):** [Harvard Dataverse](https://doi.org/10.7910/DVN/B7JRID)
- **Paper:** Coats, Steven, Carmelo Alessandro Basile, Cameron Morin, and Robert Fuchs. Forthcoming. The YouTube Corpus of Singapore English Podcasts. *English World-Wide*.
- **Search site:** [YCSEP](https://ycsep.corpora.li)

## Potential Uses

Corpus-linguistic analysis of lexis, grammar, and interaction in Singapore English and Singapore-related discourse.

### Direct Use

The dataset is intended for direct use in research on Singapore English and related discourse. Use cases include syntactic, lexical, and pragmatic analysis, as well as speech processing, sociolinguistic variation studies, and corpus-based natural language processing tasks.
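As a toy illustration of the kind of corpus-linguistic query the *text* field supports, token frequencies (for example of the discourse particle *lah*) can be counted across turns. The turns below are invented for illustration, and whitespace tokenisation is a simplification; real analyses would use a proper tokeniser:

```python
from collections import Counter

# Invented sample turns, standing in for the corpus's *text* field.
turns = [
    "can lah no problem",
    "the food there very good lah",
    "can can we go tomorrow",
]

# Naive whitespace tokenisation over lowercased turns.
freq = Counter(tok for turn in turns for tok in turn.lower().split())
```

The same pattern scales to the full corpus by iterating over the dataset's *text* column instead of a hand-written list.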

Access to this dataset is granted for non-commercial research and educational purposes only, in line with Article 4 of the EU Directive on Copyright in the Digital Single Market (EU DSM Directive), 
Fair Use provisions under US copyright law (17 U.S. Code § 107), and Fair Use provisions under Singapore’s Copyright Act 2021.

By requesting access, users affirm that their use complies with these principles and that the dataset will only be used for non-commercial research or teaching purposes. Proper citation of the dataset and related publications must be included in all outputs. If you are unsure whether your use qualifies, please contact the curator: [@stcoats](https://huggingface.co/stcoats)

### Out-of-Scope Use

Because the transcripts are the uncorrected output of Whisper's large-v3 model, the dataset is not appropriate for training commercial ASR or TTS systems. It is also not suitable for applications requiring speaker consent or fine-grained demographic metadata, as this information is not uniformly available.

## Dataset Structure

Each row in the dataset represents a speaker turn, with the following fields:

- *id*: Unique utterance identifier
- *channel*: YouTube channel name
- *video_id*: YouTube video identifier
- *video_title*: Title of the podcast episode
- *speaker*: Speaker label from diarization
- *text*: Transcript of the utterance
- *pos_tags*: Universal POS tags from spaCy
- *audio*: Audio segment (16 kHz)
- *start_time*: Start time in the video (in seconds)
- *end_time*: End time in the video (in seconds)
- *upload_date*: Date of upload (stored as an integer, YYYYMMDD)
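The sketch below works with a single hypothetical row following this schema (all values are invented) to compute the turn duration and pair tokens with their POS tags. The real corpus would be loaded with the Hugging Face `datasets` library, e.g. `load_dataset("stcoats/YCSEP_v1")`:

```python
# One hypothetical row following the schema above (all values invented).
row = {
    "id": "example_0001",
    "channel": "ExamplePodcast",
    "video_id": "abc123XYZ",
    "video_title": "Episode 1",
    "speaker": "SPEAKER_00",
    "text": "We go hawker centre already lah",
    "pos_tags": "PRON VERB NOUN NOUN ADV PART",
    "start_time": 12.34,
    "end_time": 15.10,
    "upload_date": 20250101,
}

# Turn duration in seconds.
duration = row["end_time"] - row["start_time"]

# Pair each token with its Universal POS tag; both are space-separated strings
# of equal length, so they can be zipped position by position.
pairs = list(zip(row["text"].split(), row["pos_tags"].split()))
```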

## Dataset Creation

The dataset was created in early 2025.

### Curation Rationale

Singapore English is underrepresented in publicly available corpora. This dataset fills a gap by providing a large, naturalistic, and contemporary corpus of spoken Singapore English in a media format. It was curated to support both linguistic research and machine learning tasks involving regional varieties of English.

### Source Data

The data was created from podcast recordings by Singapore-based podcasters. See the linked paper for more details. 

### Annotations

Annotations include a unique identifier for each utterance (*id*), the YouTube channel, video identifier, and video title for the utterance (*channel*, *video_id*, *video_title*),
 the transcribed speech (*text*), the corresponding part-of-speech tags (*pos_tags*), the audio (*audio*), start and end times for the utterance in the corresponding video (*start_time*, *end_time*),
 and the date the podcast was uploaded to YouTube (*upload_date*).

#### Annotation process

Transcriptions were generated automatically using WhisperX with word-level alignment. POS tags were applied using spaCy's *en_core_web_sm* model. No manual correction was performed due to the scale of the data, but filtering and consistency checks were applied to remove low-confidence segments.
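The low-confidence filtering step can be sketched as follows. This is an illustrative reconstruction, not the actual pipeline code; the field name `avg_confidence` and the threshold value are assumptions:

```python
# Hypothetical segments as produced by an ASR aligner: each carries its text
# and an average word-level confidence score (field names are illustrative).
segments = [
    {"text": "Hello everyone, welcome back.", "avg_confidence": 0.94},
    {"text": "mm hmm", "avg_confidence": 0.41},
    {"text": "Today we talk about hawker food.", "avg_confidence": 0.88},
]

MIN_CONFIDENCE = 0.6  # assumed threshold, not the authors' actual value

# Keep only segments whose mean confidence clears the threshold.
kept = [s for s in segments if s["avg_confidence"] >= MIN_CONFIDENCE]
```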

#### Who are the annotators?

The annotations were automatically generated using pre-trained machine learning models (WhisperX for ASR/alignment, pyannote for diarization, and spaCy for POS tagging). No manual annotators were involved.

#### Personal and Sensitive Information

The dataset may contain incidental personal information disclosed during podcast episodes, but it does not include any additional metadata about speakers. Because the dataset is derived from publicly available YouTube content, its research use falls under the copyright exceptions described above, but researchers should be mindful of ethical implications and avoid extracting or redistributing sensitive content.

## Bias, Risks, and Limitations

- **Bias:** The speakers are self-selected public figures or podcast hosts and may not reflect broader Singaporean demographics.
- **Limitations:** Automatic transcription may misrepresent speech, particularly code-switching or non-standard pronunciation.
- **Risks:** The dataset includes personal speech; users should avoid downstream applications that attempt re-identification or speaker profiling.

### Recommendations


Users should be aware of the risks, biases, and limitations described above. In particular, findings should be interpreted in light of possible ASR transcription errors, and the speaker sample should not be treated as representative of broader Singaporean demographics.

## Citation


### Dataset: 

**BibTeX:**

```
@misc{coats2025ycsep,
  author = {Coats, Steven and Basile, Carmelo Alessandro and Morin, Cameron and Fuchs, Robert},
  title = {The YouTube Corpus of Singapore English Podcasts (YCSEP)},
  year = {2025},
  howpublished = {\url{https://huggingface.co/datasets/stcoats/YCSEP_v1}},
  note = {Dataset}
}
```
**APA:**

Coats, S., Basile, C. A., Morin, C., & Fuchs, R. (2025). The YouTube Corpus of Singapore English Podcasts (YCSEP) [Dataset]. Hugging Face. https://huggingface.co/datasets/stcoats/YCSEP_v1

### Paper:

**BibTeX:**

```
@article{coats2024eww,
  author = {Coats, Steven and Basile, Carmelo Alessandro and Morin, Cameron and Fuchs, Robert},
  title = {The YouTube Corpus of Singapore English Podcasts},
  journal = {English World-Wide},
  year = {forthcoming}
}
```

**APA:**

Coats, S., Basile, C. A., Morin, C., & Fuchs, R. (forthcoming). The YouTube Corpus of Singapore English Podcasts. *English World-Wide*.


## Dataset Card Contact

[@stcoats](https://huggingface.co/stcoats)