---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- text segmentation
- smart chaptering
- segmentation
- youtube
- asr
pretty_name: YTSeg
size_categories:
- 10K<n<100K
task_categories:
- token-classification
- automatic-speech-recognition
configs:
- config_name: audio
data_files:
- split: train
path: audio/train-*
- split: validation
path: audio/validation-*
- split: test
path: audio/test-*
- config_name: text
data_files:
- split: train
path: data/partitions/yt_seg.train.json
- split: validation
path: data/partitions/yt_seg.val.json
- split: test
path: data/partitions/yt_seg.test.json
- config_name: titles
data_files:
- split: train
path: data/partitions/yt_seg_titles.train.json
- split: validation
path: data/partitions/yt_seg_titles.val.json
- split: test
path: data/partitions/yt_seg_titles.test.json
dataset_info:
config_name: audio
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text_ref
list: string
- name: text_wt
list: string
- name: text_wl
list: string
- name: target_binary_ref
dtype: string
- name: target_binary_wt
dtype: string
- name: target_binary_wl
dtype: string
- name: target_text_ref
dtype: string
- name: target_text_ts_ref
dtype: string
- name: target_ts
dtype: string
- name: chapter_titles
list: string
- name: chapter_timestamps
list: float64
- name: channel_id
dtype: string
- name: video_id
dtype: string
- name: speaker_category
dtype: string
- name: dominant_speaker_proportion
dtype: float64
- name: num_speakers
dtype: int64
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 62127154526
num_examples: 16404
- name: validation
num_bytes: 5483478090
num_examples: 1447
- name: test
num_bytes: 5658475811
num_examples: 1448
download_size: 71470669858
dataset_size: 73269108427
---
# From Text Segmentation to Smart Chaptering: A Novel Benchmark for Structuring Video Transcriptions
We present YTSeg, a topically and structurally diverse benchmark for the text segmentation task based on YouTube transcriptions. The dataset comprises 19,299 videos from 393 channels, amounting to 6,533 content hours. The topics are wide-ranging, covering domains such as science, lifestyle, politics, health, economy, and technology. The videos are from various types of content formats, such as podcasts, lectures, news, corporate events & promotional content, and, more broadly, videos from individual content creators. We refer to the paper (acl | arXiv) for further information. We provide both text and audio data as well as a download script for the video data.
## Data Overview
We offer three dataset subsets:
- Text — For text-based segmentation and chaptering approaches using transcripts.
- Audio — For audio-based chaptering approaches with embedded audio.
- Titles — For chapter title generation given segment text (relevant for two-stage approaches).
### YTSeg (Text)
Each video is represented as a JSON object. The fields are organized into three categories: Transcripts, Target Representations, and Metadata.
#### Transcripts

We provide three transcript variants for each video: the original reference transcript and two ASR-generated transcripts using Whisper models.

| Field | Description |
|---|---|
| `text_ref` | Reference transcript as a flat list of sentences. |
| `text_wt` | Whisper-tiny ASR transcript as a flat list of sentences. |
| `text_wl` | Whisper-large ASR transcript as a flat list of sentences. |
#### Target Representations

Multiple target formats are provided for different modeling approaches.

| Field | Description |
|---|---|
| `target_binary_ref` | Binary segmentation labels for the reference transcript (e.g., `\|=000100000010`). |
| `target_binary_wt` | Binary segmentation labels for the Whisper-tiny transcript. |
| `target_binary_wl` | Binary segmentation labels for the Whisper-large transcript. |
| `target_text_ref` | Structured transcript with chapter markers (e.g., `[CSTART] Title [CEND] text...`). |
| `target_text_ts_ref` | Structured transcript with timestamped chapter markers (e.g., `[CSTART] 00:01:23 - Title [CEND] text...`). |
| `target_ts` | Timestamped chapter markers only (e.g., `[CSTART] 00:01:23 - Title [CEND]\n...`). |
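The structured targets can be parsed back into (title, text) pairs with a small helper. A minimal sketch using only the `[CSTART]`/`[CEND]` markers shown above (the helper name `parse_chapters` and the sample string are illustrative, not part of the dataset):

```python
import re

def parse_chapters(target_text: str):
    """Split a '[CSTART] Title [CEND] text...' string into (title, text) pairs."""
    # Capture each title and the text that follows it up to the next marker
    # (or the end of the string).
    pattern = re.compile(
        r"\[CSTART\]\s*(.*?)\s*\[CEND\]\s*(.*?)(?=\[CSTART\]|\Z)", re.DOTALL
    )
    return [(title, text.strip()) for title, text in pattern.findall(target_text)]

sample = "[CSTART] Intro [CEND] Welcome to the show. [CSTART] Main Topic [CEND] Let's dive in."
print(parse_chapters(sample))
# [('Intro', 'Welcome to the show.'), ('Main Topic', "Let's dive in.")]
```

The same pattern applies to `target_text_ts_ref` and `target_ts`, whose titles additionally carry an `HH:MM:SS -` prefix.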
#### Metadata

| Field | Description |
|---|---|
| `audio_path` | Path to the `.mp3` file of the video. |
| `chapter_titles` | A list of chapter titles corresponding to each segment. |
| `chapter_timestamps` | A list of chapter start times in seconds (e.g., `[0.0, 25.0, 269.0]`). |
| `channel_id` | The ID of the YouTube channel to which the video belongs. |
| `video_id` | The YouTube video ID. |
| `speaker_category` | Speaker classification: `single`, `single_weak`, or `multiple`. |
| `dominant_speaker_proportion` | Proportion of speech from the dominant speaker (0.0-1.0). |
| `num_speakers` | Number of detected speakers in the video. |
| `duration` | Video duration in seconds. |
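For readability, the `chapter_timestamps` values (seconds) can be converted to the `HH:MM:SS` form used in the timestamped targets. A minimal stdlib sketch (the helper name `to_hms` is ours):

```python
def to_hms(seconds: float) -> str:
    """Format a chapter start time in seconds as HH:MM:SS."""
    s = int(seconds)
    return f"{s // 3600:02d}:{(s % 3600) // 60:02d}:{s % 60:02d}"

print([to_hms(t) for t in [0.0, 25.0, 269.0]])
# ['00:00:00', '00:00:25', '00:04:29']
```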
#### Partition Statistics
| Partition | # Examples |
|---|---|
| Training | 16,404 (85%) |
| Validation | 1,447 (7.5%) |
| Testing | 1,448 (7.5%) |
| Total | 19,299 |
### YTSeg (Audio)
The audio config provides the complete dataset with embedded audio files. Each video is represented with the same fields as the text config, plus an audio field containing the preprocessed audio data.
#### Audio

| Field | Description |
|---|---|
| `audio` | Audio data preprocessed into `.mp3` format with a standardized sampling rate of 16,000 Hz and a single channel (mono). |
All other fields (transcripts, target representations, and metadata) are identical to the Text config described above.
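When loaded with the `datasets` library, the `Audio` feature decodes each example's `audio` field into a dict holding the waveform and its sampling rate. The snippet below simulates one decoded example with plain Python (no download) just to show the duration arithmetic at 16 kHz; the dict layout follows the standard `Audio` feature, and the two-second silent waveform is fabricated:

```python
# Simulated decoded example: the HF Audio feature yields a dict like
# {"array": <float waveform>, "sampling_rate": 16000} for this dataset.
example_audio = {"array": [0.0] * 32000, "sampling_rate": 16000}

# Duration in seconds = number of samples / sampling rate.
duration_s = len(example_audio["array"]) / example_audio["sampling_rate"]
print(duration_s)  # 2.0
```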
#### Partition Statistics
| Partition | # Examples | Size (GB) |
|---|---|---|
| Training | 16,404 (85%) | ~57.9 GB |
| Validation | 1,447 (7.5%) | ~5.1 GB |
| Testing | 1,448 (7.5%) | ~5.3 GB |
| Total | 19,299 | ~68.3 GB |
### YTSeg (Titles)

Each chapter of a video is represented as a JSON object with the following fields:

| Field | Description |
|---|---|
| `text_section_ref` | The complete chapter/section text. |
| `text_section_prev_titles_ref` | The complete chapter/section text with previous section titles prepended. |
| `target_title` | The target chapter title. |
| `channel_id` | The ID of the YouTube channel to which the chapter's video belongs. |
| `video_id` | The ID of the video to which the chapter belongs. |
| `chapter_idx` | The zero-based index of the chapter within the video (the first chapter has index 0). |

#### Partition Statistics

| Partition | # Examples |
|---|---|
| Training | 146,907 (84.8%) |
| Validation | 13,206 (7.6%) |
| Testing | 13,082 (7.6%) |
| Total | 173,195 |
## Video Data

A download script for the video and audio data is provided:

```shell
python download_videos.py
```

In the script, you can further specify a target folder (default is `./video`) and target formats in a priority list.
## Loading Data

The dataset can be loaded directly using the HuggingFace `datasets` library:

```python
from datasets import load_dataset

# Load the audio config (with embedded audio)
dataset = load_dataset("retkowski/ytseg", "audio", split="test")

# Load the text config (text-only)
dataset = load_dataset("retkowski/ytseg", "text", split="test")

# Load the titles config
dataset = load_dataset("retkowski/ytseg", "titles", split="test")
```
**Note on binary labels:** The binary segmentation labels (e.g., `target_binary_ref`) are prefixed with `|=` to force the field to be stored as a string, preventing leading zeros from being lost during processing. For actual usage, strip the `|=` prefix:

```python
# Index an example first: dataset["target_binary_ref"] returns the whole column.
binary_labels = dataset[0]["target_binary_ref"].removeprefix("|=")
```
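Once the prefix is stripped, the per-sentence labels can be aligned with `text_ref` to recover segments. A minimal sketch, assuming a `1` marks the last sentence of a segment (verify this convention against `chapter_timestamps` before relying on it; the helper name and sample inputs are ours):

```python
def split_sentences(sentences, binary_labels):
    """Group a flat sentence list into segments using 0/1 boundary labels."""
    labels = binary_labels.removeprefix("|=")
    segments, current = [], []
    for sent, lab in zip(sentences, labels):
        current.append(sent)
        if lab == "1":  # assumed: '1' closes the current segment
            segments.append(current)
            current = []
    if current:  # trailing sentences after the last boundary
        segments.append(current)
    return segments

sents = ["a", "b", "c", "d", "e"]
print(split_sentences(sents, "|=00101"))
# [['a', 'b', 'c'], ['d', 'e']]
```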
## Citing

We kindly request that you cite our corresponding EACL 2024 paper if you use our dataset:
```bibtex
@inproceedings{retkowski-waibel-2024-text,
    title = "From Text Segmentation to Smart Chaptering: A Novel Benchmark for Structuring Video Transcriptions",
    author = "Retkowski, Fabian and Waibel, Alexander",
    editor = "Graham, Yvette and Purver, Matthew",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-long.25",
    pages = "406--419",
    abstract = "Text segmentation is a fundamental task in natural language processing, where documents are split into contiguous sections. However, prior research in this area has been constrained by limited datasets, which are either small in scale, synthesized, or only contain well-structured documents. In this paper, we address these limitations by introducing a novel benchmark YTSeg focusing on spoken content that is inherently more unstructured and both topically and structurally diverse. As part of this work, we introduce an efficient hierarchical segmentation model MiniSeg, that outperforms state-of-the-art baselines. Lastly, we expand the notion of text segmentation to a more practical {``}smart chaptering{''} task that involves the segmentation of unstructured content, the generation of meaningful segment titles, and a potential real-time application of the models.",
}
```
## Changelog

- 20.01.2025 -- Major data and format update:
  - Added ASR transcripts (Whisper-tiny and Whisper-large), structured transcript targets with timestamps, and metadata for finer-grained analysis (speaker category, dominant speaker proportion, number of speakers, duration)
  - Added the audio config with the HuggingFace `Audio` feature for seamless loading with embedded audio
  - Updated data loading to the HuggingFace `datasets` library with proper HF configs (replacing local pandas scripts)
  - Updated `YTSeg[Titles]` field names for clarity
- 25.07.2024 -- Added the complete list of chapter titles to `YTSeg` (`YTSeg[Titles]` is a filtered subset)
- 09.04.2024 -- Added audio data
- 27.02.2024 -- Initial release
## License

The dataset is available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) license. Since we do not own the copyright of the underlying videos, we release the dataset under a non-commercial license, with its intended use being research and education.