---
dataset_info:
  features:
    - name: text
      dtype: string
---
# Pre-1900 Training Corpus
Chunked and resharded pre-1900 English text corpus, ready for language model training.
## Format
- 266 parquet shards (265 train + 1 validation)
- 12.8M documents (chunks of ≤8,000 characters)
- ~22B tokens estimated
- Text-only — single `text` column per row
- Row groups divisible by 8 for even DDP distribution across GPUs
- Last shard (`shard_00265`) is the validation split
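Because each shard's row-group count is divisible by 8, the groups divide evenly across data-parallel ranks with no remainder. A minimal sketch of one way to exploit this (the helper name and the round-robin assignment are illustrative assumptions, not part of the dataset's tooling):

```python
def row_groups_for_rank(num_row_groups: int, rank: int, world_size: int = 8) -> list[int]:
    """Assign parquet row groups to a DDP rank round-robin.

    When num_row_groups is divisible by world_size, every rank
    receives exactly num_row_groups // world_size groups.
    """
    return list(range(rank, num_row_groups, world_size))

# With 16 row groups and 8 GPUs, each rank reads exactly 2 groups.
per_rank = [row_groups_for_rank(16, r) for r in range(8)]
```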
## Processing Pipeline
Built from the full pre-1900 filtered corpus through:
- OCR cleanup — removal of OCR artifacts, boilerplate, and unicode normalization
- Quality filtering — token frequency prior-based filtering
- Anachronism detection — three-tier post-1900 physics filter
- Document chunking — long documents split at paragraph/sentence boundaries (max 8K chars, min 200 chars)
- Token balancing — sort-by-length + round-robin distribution across shards for even token counts
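The chunking and balancing steps above can be sketched as follows. The exact heuristics used to build the dataset are not published, so this is an approximation under the stated limits (max 8,000 characters, min 200 characters, paragraph-first splitting); the function names are illustrative.

```python
MAX_CHARS, MIN_CHARS = 8_000, 200

def chunk_document(text: str) -> list[str]:
    """Split a long document at paragraph boundaries into <=8K-char
    chunks, dropping fragments shorter than the minimum length."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if len(current) + len(para) + 2 <= MAX_CHARS:
            current = f"{current}\n\n{para}" if current else para
        else:
            if len(current) >= MIN_CHARS:
                chunks.append(current)
            current = para[:MAX_CHARS]  # hard cut for oversized paragraphs
    if len(current) >= MIN_CHARS:
        chunks.append(current)
    return chunks

def balance_shards(chunks: list[str], n_shards: int) -> list[list[str]]:
    """Sort by length, then deal round-robin so per-shard token counts stay even."""
    shards = [[] for _ in range(n_shards)]
    for i, chunk in enumerate(sorted(chunks, key=len, reverse=True)):
        shards[i % n_shards].append(chunk)
    return shards
```

Sorting longest-first before dealing round-robin keeps the length distribution of each shard nearly identical, which is what makes the per-shard token counts come out even.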
## Usage
```python
from datasets import load_dataset

ds = load_dataset("mhla/pre1900-training")
```
## Related
- `mhla/pre1900-corpus` — full documents with metadata (title, year, source, OCR scores)
- `mhla/gpt1900-d26-8btok` — GPT-1900 model trained on this data