---
language:
- en
license: cc-by-nc-sa-4.0
size_categories:
- 10K<n<100K
task_categories:
- token-classification
- automatic-speech-recognition
pretty_name: YTSeg
tags:
- text segmentation
- smart chaptering
- segmentation
- youtube
- asr
configs:
- config_name: audio
data_files:
- split: train
path: audio/train-*
- split: validation
path: audio/validation-*
- split: test
path: audio/test-*
- config_name: text
data_files:
- split: train
path: text/train-*
- split: validation
path: text/validation-*
- split: test
path: text/test-*
- config_name: titles
data_files:
- split: train
path: titles/train-*
- split: validation
path: titles/validation-*
- split: test
path: titles/test-*
dataset_info:
- config_name: audio
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text_ref
list: string
- name: text_wt
list: string
- name: text_wl
list: string
- name: target_binary_ref
dtype: string
- name: target_binary_wt
dtype: string
- name: target_binary_wl
dtype: string
- name: target_text_ref
dtype: string
- name: target_text_ts_ref
dtype: string
- name: target_ts
dtype: string
- name: chapter_titles
list: string
- name: chapter_timestamps
list: float64
- name: channel_id
dtype: string
- name: video_id
dtype: string
- name: speaker_category
dtype: string
- name: dominant_speaker_proportion
dtype: float64
- name: num_speakers
dtype: int64
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 62127154526
num_examples: 16404
- name: validation
num_bytes: 5483478090
num_examples: 1447
- name: test
num_bytes: 5658475811
num_examples: 1448
download_size: 71470669858
dataset_size: 73269108427
- config_name: text
features:
- name: text_ref
sequence: string
- name: text_wt
sequence: string
- name: text_wl
sequence: string
- name: target_binary_ref
dtype: string
- name: target_binary_wt
dtype: string
- name: target_binary_wl
dtype: string
- name: target_text_ref
dtype: string
- name: target_text_ts_ref
dtype: string
- name: target_ts
dtype: string
- name: audio_path
dtype: string
- name: chapter_titles
sequence: string
- name: chapter_timestamps
sequence: float64
- name: channel_id
dtype: string
- name: video_id
dtype: string
- name: speaker_category
dtype: string
- name: dominant_speaker_proportion
dtype: float64
- name: num_speakers
dtype: int64
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 1569140592
num_examples: 16404
- name: validation
num_bytes: 141371796
num_examples: 1447
- name: test
num_bytes: 142148239
num_examples: 1448
download_size: 1035208355
dataset_size: 1852660627
- config_name: titles
features:
- name: text_section_ref
dtype: string
- name: text_section_prev_titles_ref
dtype: string
- name: target_title
dtype: string
- name: channel_id
dtype: string
- name: video_id
dtype: string
- name: chapter_idx
dtype: int64
splits:
- name: train
num_bytes: 614309842
num_examples: 146907
- name: validation
num_bytes: 55897452
num_examples: 13206
- name: test
num_bytes: 56121869
num_examples: 13082
download_size: 389373572
dataset_size: 726329163
---
# From Text Segmentation to Smart Chaptering: A Novel Benchmark for Structuring Video Transcriptions
We present <span style="font-variant:small-caps; font-weight:700;">YTSeg</span>, a topically and structurally diverse benchmark for the text segmentation task based on YouTube transcriptions. The dataset comprises 19,299 videos from 393 channels, amounting to 6,533 content hours. The topics are wide-ranging, covering domains such as science, lifestyle, politics, health, economy, and technology. The videos span various content formats, such as podcasts, lectures, news, corporate events & promotional content, and, more broadly, videos from individual content creators. We refer to the **paper** ([acl](https://aclanthology.org/2024.eacl-long.25/) | [arXiv](https://arxiv.org/abs/2402.17633)) for further information. We provide both text and audio data as well as a download script for the video data.
## Data Overview
We offer three dataset subsets:
- **Text** — For text-based segmentation and chaptering approaches using transcripts.
- **Audio** — For audio-based chaptering approaches with embedded audio.
- **Titles** — For chapter title generation given segment text (relevant for two-stage approaches).
### <span style="font-variant:small-caps;">YTSeg</span> (Text)
Each video is represented as a JSON object. The fields are organized into three categories: **Transcripts**, **Target Representations**, and **Metadata**.
#### Transcripts
We provide three transcript variants for each video: the original reference transcript and two ASR-generated transcripts using Whisper models.
| Field | Description |
|--------|-------------|
| `text_ref` | Reference transcript as a flat list of sentences. |
| `text_wt` | Whisper-tiny ASR transcript as a flat list of sentences. |
| `text_wl` | Whisper-large ASR transcript as a flat list of sentences. |
#### Target Representations
Multiple target formats are provided for different modeling approaches.
| Field | Description |
|--------|-------------|
| `target_binary_ref` | Binary segmentation labels for reference transcript (e.g., `\|=000100000010`). |
| `target_binary_wt` | Binary segmentation labels for Whisper-tiny transcript. |
| `target_binary_wl` | Binary segmentation labels for Whisper-large transcript. |
| `target_text_ref` | Structured transcript with chapter markers (e.g., `[CSTART] Title [CEND] text...`). |
| `target_text_ts_ref` | Structured transcript with timestamped chapter markers (e.g., `[CSTART] 00:01:23 - Title [CEND] text...`). |
| `target_ts` | Timestamped chapter markers only (e.g., `[CSTART] 00:01:23 - Title [CEND]\n...`). |
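As an illustrative sketch of how the structured targets can be consumed, the following snippet (not part of the dataset tooling) parses the `[CSTART] ... [CEND]` markers of a `target_text_ts_ref`-style string into `(start_seconds, title, text)` tuples. The sample string is made up to match the format shown above:

```python
import re

# Illustrative input mimicking the target_text_ts_ref format documented above
sample = ("[CSTART] 00:00:00 - Intro [CEND] Welcome back. "
          "[CSTART] 00:01:23 - Setup [CEND] First, install it.")

def parse_chapters(target_text):
    """Split a structured transcript into (start_seconds, title, text) tuples."""
    pattern = re.compile(r"\[CSTART\]\s*(\d{2}):(\d{2}):(\d{2})\s*-\s*(.*?)\s*\[CEND\]\s*", re.S)
    matches = list(pattern.finditer(target_text))
    chapters = []
    for i, m in enumerate(matches):
        h, mnt, s, title = m.groups()
        start = int(h) * 3600 + int(mnt) * 60 + int(s)
        # Chapter text runs until the next marker (or the end of the string)
        end = matches[i + 1].start() if i + 1 < len(matches) else len(target_text)
        chapters.append((start, title, target_text[m.end():end].strip()))
    return chapters

print(parse_chapters(sample))
# → [(0, 'Intro', 'Welcome back.'), (83, 'Setup', 'First, install it.')]
```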
#### Metadata
| Field | Description |
|--------|-------------|
| `audio_path` | Path to the .mp3 file of the video. |
| `chapter_titles` | A list of chapter titles corresponding to each segment. |
| `chapter_timestamps` | A list of chapter start times in seconds (e.g., `[0.0, 25.0, 269.0]`). |
| `channel_id` | The ID of the YouTube channel to which this video belongs. |
| `video_id` | The YouTube video ID. |
| `speaker_category` | Speaker classification: `single`, `single_weak`, or `multiple`. |
| `dominant_speaker_proportion` | Proportion of speech from the dominant speaker (0.0-1.0). |
| `num_speakers` | Number of detected speakers in the video. |
| `duration` | Video duration in seconds. |
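Since `chapter_timestamps` holds chapter start times and `duration` the total video length, per-chapter durations follow by differencing. A minimal sketch with made-up values shaped like the fields above:

```python
# Toy values, illustrative only, shaped like the metadata fields described above
chapter_timestamps = [0.0, 25.0, 269.0]   # chapter start times in seconds
duration = 330.0                          # total video duration in seconds

# Each chapter ends where the next one starts; the last ends at the video's end
ends = chapter_timestamps[1:] + [duration]
chapter_durations = [end - start for start, end in zip(chapter_timestamps, ends)]
print(chapter_durations)  # → [25.0, 244.0, 61.0]
```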
#### Partition Statistics
| Partition | # Examples |
|------------|--------------|
| Training | 16,404 (85%) |
| Validation | 1,447 (7.5%) |
| Testing | 1,448 (7.5%) |
| Total | 19,299 |
### <span style="font-variant:small-caps;">YTSeg</span> (Audio)
The audio config provides the complete dataset with embedded audio files. Each video is represented with the same fields as the text config, plus an `audio` field containing the preprocessed audio data.
#### Audio
| Field | Description |
|--------|-------------|
| `audio` | Audio data preprocessed into .mp3 format with a standardized sample rate of 16,000 Hz and a single channel (mono). |
All other fields (transcripts, target representations, and metadata) are identical to the Text config described above.
#### Partition Statistics
| Partition | # Examples | Size (GB) |
|------------|--------------|-----------|
| Training | 16,404 (85%) | ~57.9 GB |
| Validation | 1,447 (7.5%) | ~5.1 GB |
| Testing | 1,448 (7.5%) | ~5.3 GB |
| Total | 19,299 | ~68.3 GB |
### <span style="font-variant:small-caps;">YTSeg</span> (Titles)
Each chapter of a video is represented as a JSON object with the following fields:
| Field | Description |
|--------------|------------------------------------------------|
| `text_section_ref` | The complete chapter/section text. |
| `text_section_prev_titles_ref` | The complete chapter/section text with previous section titles prepended. |
| `target_title` | The target chapter title. |
| `channel_id` | The ID of the YouTube channel to which this chapter's video belongs. |
| `video_id` | The ID of the YouTube video to which this chapter belongs. |
| `chapter_idx` | The index and placement of the chapter in the video (e.g., the first chapter has index `0`). |
| Partition | # Examples |
|------------|--------------|
| Training | 146,907 (84.8%)|
| Validation | 13,206 (7.6%) |
| Testing | 13,082 (7.6%) |
| Total | 173,195 |
### Video Data
A download script for the video and audio data is provided.
```sh
python download_videos.py
```
In the script, you can further specify a target folder (default is `./video`) and target formats in a priority list.
## Loading Data
The dataset can be loaded directly using the HuggingFace `datasets` library:
```py
from datasets import load_dataset

# Load the audio config (with embedded audio)
dataset = load_dataset("retkowski/ytseg", "audio", split="test")

# Load the text config (text-only)
dataset = load_dataset("retkowski/ytseg", "text", split="test")

# Load the titles config
dataset = load_dataset("retkowski/ytseg", "titles", split="test")
```
**Note on Binary Labels:** The binary segmentation labels (e.g., `target_binary_ref`) are prefixed with `|=` to force the field to be stored as a string, preventing leading zeros from being lost during processing. For actual usage, strip the `|=` prefix:
```py
# Index an example first: column access (dataset['target_binary_ref']) returns a list of strings
binary_labels = dataset[0]['target_binary_ref'].lstrip('|=')
```
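Once the prefix is stripped, segments can be reconstructed by zipping the labels with the sentence list. This sketch assumes `1` marks the last sentence of a segment (the exact boundary convention should be verified against the paper); the inputs are toy values:

```python
# Toy inputs, illustrative only; in practice these come from one dataset example
sentences = ["Hi.", "Today's topic.", "First point.", "Second point."]
labels = "|=0101".lstrip("|=")  # assumption: '1' marks the last sentence of a segment

segments, current = [], []
for sent, flag in zip(sentences, labels):
    current.append(sent)
    if flag == "1":       # close the current segment at a boundary
        segments.append(current)
        current = []
if current:               # keep any trailing sentences as a final segment
    segments.append(current)

print(segments)
# → [['Hi.', "Today's topic."], ['First point.', 'Second point.']]
```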
## Citing
If you use our dataset, we kindly ask that you cite our corresponding EACL 2024 paper.
```bibtex
@inproceedings{retkowski-waibel-2024-text,
title = "From Text Segmentation to Smart Chaptering: A Novel Benchmark for Structuring Video Transcriptions",
author = "Retkowski, Fabian and Waibel, Alexander",
editor = "Graham, Yvette and Purver, Matthew",
booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = mar,
year = "2024",
address = "St. Julian{'}s, Malta",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.eacl-long.25",
pages = "406--419",
abstract = "Text segmentation is a fundamental task in natural language processing, where documents are split into contiguous sections. However, prior research in this area has been constrained by limited datasets, which are either small in scale, synthesized, or only contain well-structured documents. In this paper, we address these limitations by introducing a novel benchmark YTSeg focusing on spoken content that is inherently more unstructured and both topically and structurally diverse. As part of this work, we introduce an efficient hierarchical segmentation model MiniSeg, that outperforms state-of-the-art baselines. Lastly, we expand the notion of text segmentation to a more practical {``}smart chaptering{''} task that involves the segmentation of unstructured content, the generation of meaningful segment titles, and a potential real-time application of the models.",
}
```
## Changelog
- 20.01.2025 -- Major data and format update:
- Added ASR transcripts (Whisper-tiny and Whisper-large), structured transcript targets with timestamps, and metadata for finer-grained analysis (speaker category, dominant speaker proportion, number of speakers, duration)
- Added audio config with HuggingFace Audio feature for seamless loading with embedded audio
  - Updated to use the HuggingFace `datasets` library for data loading (replacing local pandas scripts and using proper HF configs)
- Updated `YTSeg[Titles]` field names for clarity
- 25.07.2024 -- Added complete list of chapter titles to `YTSeg` (`YTSeg[Titles]` is a filtered subset)
- 09.04.2024 -- Added audio data
- 27.02.2024 -- Initial release
## License
The dataset is available under the **Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) 4.0** license. As we do not own the copyright to the videos, we opted to release the dataset under a non-commercial license, with the intended use being research and education.