---
license: cc-by-sa-4.0
dataset_info:
  features:
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 18070709909
    num_examples: 121082466
  download_size: 9046108313
  dataset_size: 18070709909
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Wikipedia Utterances

Text from English Wikipedia, segmented into utterance-length chunks suitable for text-to-speech synthesis.

## Dataset Description
This dataset contains ~41M text utterances derived from the wikimedia/wikipedia dataset (20231101.en snapshot). Each row contains:
| Field | Type | Description |
|---|---|---|
| `title` | string | The Wikipedia article title |
| `text` | string | A text segment (10-4,880 characters) |
| `duration` | float | Estimated speech duration in seconds (at 150 WPM) |
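The `duration` field appears to be derived from the word count at the stated 150 words-per-minute rate. A minimal sketch of that estimate (the function name is illustrative, not taken from this repository):

```python
def estimate_duration(text: str, wpm: int = 150) -> float:
    """Estimated speech duration in seconds at the given words-per-minute rate."""
    return len(text.split()) / wpm * 60

# 150 words at 150 WPM -> 60 seconds
print(estimate_duration("word " * 150))
```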
## Processing
The transformation pipeline:
- Tokenizes Wikipedia articles into paragraphs and sentences using NLTK
- Combines consecutive sentences targeting 15-30 second utterances
- Strips bracketed content (parentheses, braces, square brackets)
- Filters for valid utterances ending in sentence-final punctuation
See `transform_wikipedia.py` in this repository for the full implementation.
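The steps above can be sketched roughly as follows. This is an illustrative reconstruction, not the code in `transform_wikipedia.py`: the names and thresholds are assumptions, and a simple regex splitter stands in for NLTK's `sent_tokenize` to keep the example dependency-free.

```python
import re

WPM = 150           # assumed speaking rate used for duration estimates
TARGET_MAX_S = 30   # upper end of the 15-30 second utterance target

def estimated_seconds(text: str) -> float:
    """Estimated speech duration in seconds at WPM words per minute."""
    return len(text.split()) / WPM * 60

# Strips bracketed content: parentheses, square brackets, braces
BRACKETS = re.compile(r"\([^()]*\)|\[[^\[\]]*\]|\{[^{}]*\}")
# Stand-in for nltk.sent_tokenize: split after sentence-final punctuation
SENT_END = re.compile(r"(?<=[.!?])\s+")

def segment_paragraph(paragraph: str) -> list[str]:
    """Combine consecutive sentences into utterances up to ~TARGET_MAX_S."""
    utterances, chunk = [], ""
    for sentence in SENT_END.split(paragraph):
        # strip bracketed content and normalize whitespace
        sentence = " ".join(BRACKETS.sub("", sentence).split())
        if not sentence:
            continue
        candidate = (chunk + " " + sentence).strip()
        if chunk and estimated_seconds(candidate) > TARGET_MAX_S:
            utterances.append(chunk)  # flush the current utterance
            chunk = sentence
        else:
            chunk = candidate
    if chunk:
        utterances.append(chunk)
    # keep only utterances ending in sentence-final punctuation
    return [u for u in utterances if u[-1] in ".!?"]
```

For example, `segment_paragraph("Alpha (born 1900) was a writer. She wrote many books.")` drops the parenthetical and merges both short sentences into one utterance.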
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("jspaulsen/wikipedia-utterances", split="train")
```
## License
This dataset is released under CC BY-SA 4.0, consistent with the original Wikipedia content license.
## Citation
If you use this dataset, please cite the original Wikimedia source:
```bibtex
@ONLINE{wikidump,
  author = "Wikimedia Foundation",
  title  = "Wikimedia Downloads",
  url    = "https://dumps.wikimedia.org"
}
```