---
dataset_info:
  features:
  - name: text
    dtype: string
---

# Pre-1900 Training Corpus

Chunked and resharded pre-1900 English text corpus, ready for language model training.

## Format

- **266 Parquet shards** (265 train + 1 validation)
- **12.8M documents** (chunks of ≤8,000 characters)
- **~22B tokens** (estimated)
- **Text-only** — a single `text` column per row
- Row groups divisible by 8 for even DDP distribution across GPUs
- The last shard (`shard_00265`) is the validation split
## Processing Pipeline

Built from the full pre-1900 filtered corpus through:

1. **OCR cleanup** — removal of OCR artifacts and boilerplate, plus Unicode normalization
2. **Quality filtering** — filtering based on a token-frequency prior
3. **Anachronism detection** — a three-tier filter for post-1900 physics content
4. **Document chunking** — long documents split at paragraph/sentence boundaries (max 8,000 characters, min 200)
5. **Token balancing** — sort-by-length plus round-robin distribution across shards for even per-shard token counts
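
Step 5 can be sketched in a few lines. This is a minimal illustration, not the actual pipeline code: it uses character length as a stand-in for token count, and `balance_shards` is a hypothetical helper:

```python
# Sketch of sort-by-length + round-robin sharding. Dealing chunks
# longest-first keeps per-shard totals roughly even.
def balance_shards(chunks, num_shards):
    """Distribute text chunks across shards: longest first, round-robin."""
    ordered = sorted(chunks, key=len, reverse=True)
    shards = [[] for _ in range(num_shards)]
    for i, chunk in enumerate(ordered):
        shards[i % num_shards].append(chunk)
    return shards

chunks = ["a" * n for n in (9000, 100, 4000, 4100, 8000, 900)]
shards = balance_shards(chunks, 2)
print([sum(len(c) for c in s) for s in shards])  # [14000, 12100]
```

A greedy assignment to the currently lightest shard would balance more tightly, but the sorted round-robin above matches the pipeline description and is cheaper to compute.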
## Usage

```python
from datasets import load_dataset

ds = load_dataset("mhla/pre1900-training")
```
## Related

- [`mhla/pre1900-corpus`](https://huggingface.co/datasets/mhla/pre1900-corpus) — full documents with metadata (title, year, source, OCR scores)
- [`mhla/gpt1900-d26-8btok`](https://huggingface.co/mhla/gpt1900-d26-8btok) — GPT-1900 model trained on this data