---
dataset_info:
  features:
    - name: text
      dtype: string
---

# Pre-1900 Training Corpus

Chunked and resharded pre-1900 English text corpus, ready for language model training.

## Format

- 266 Parquet shards (265 train + 1 validation)
- 12.8M documents (chunks of ≤8,000 characters)
- ~22B tokens (estimated)
- Text-only — a single `text` column per row
- Row groups divisible by 8 for even DDP distribution across GPUs
- The last shard (`shard_00265`) is the validation split
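The headline numbers are mutually consistent under a rough characters-per-token assumption. The average chunk length and the 4.0 ratio below are illustrative guesses, not figures from the pipeline:

```python
# Back-of-envelope check of the ~22B-token estimate.
n_docs = 12_800_000     # documents (chunks of at most 8,000 characters)
avg_chars = 6_900       # assumed average chunk length (the cap is 8,000)
chars_per_token = 4.0   # common rough ratio for English text
est_tokens = n_docs * avg_chars / chars_per_token
print(f"~{est_tokens / 1e9:.0f}B tokens")  # → ~22B tokens
```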

## Processing Pipeline

Built from the full pre-1900 filtered corpus through:

1. OCR cleanup — removal of OCR artifacts and boilerplate, plus Unicode normalization
2. Quality filtering — filtering based on a token-frequency prior
3. Anachronism detection — a three-tier post-1900 physics filter
4. Document chunking — long documents split at paragraph/sentence boundaries (max 8,000 chars, min 200 chars)
5. Token balancing — sort-by-length plus round-robin distribution across shards for even token counts
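Steps 4 and 5 can be sketched as follows. This is a minimal illustration, not the pipeline's actual code: the chunker packs paragraphs greedily (the sentence-boundary fallback is omitted), and the balancer uses character length as a stand-in for token count.

```python
def chunk_document(text, max_chars=8_000, min_chars=200):
    """Step 4 (sketch): split a long document at paragraph boundaries into
    chunks of at most max_chars, discarding fragments under min_chars."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if len(current) >= min_chars:
                chunks.append(current)
            # hard cut for an oversized single paragraph; the real pipeline
            # falls back to sentence boundaries here
            current = para[:max_chars]
    if len(current) >= min_chars:
        chunks.append(current)
    return chunks

def balance_shards(chunks, n_shards=265):
    """Step 5 (sketch): sort chunks by length, then deal them round-robin
    so every shard ends up with a roughly equal total."""
    shards = [[] for _ in range(n_shards)]
    for i, chunk in enumerate(sorted(chunks, key=len, reverse=True)):
        shards[i % n_shards].append(chunk)
    return shards
```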

## Usage

```python
from datasets import load_dataset

ds = load_dataset("mhla/pre1900-training")
```
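Because the shards are laid out for 8-way DDP, every rank can take an equal contiguous slice with no remainder. A minimal sketch on a plain Python list, assuming the divisible-by-8 row groups translate into evenly divisible row counts; with a loaded `datasets.Dataset`, `ds.shard(num_shards=world_size, index=rank)` achieves the same effect.

```python
def rank_slice(rows, rank, world_size=8):
    """Give DDP rank `rank` a contiguous 1/world_size slice of the data.
    Assumes the row count divides evenly, as the shard format intends."""
    assert len(rows) % world_size == 0, "uneven split across ranks"
    per_rank = len(rows) // world_size
    return rows[rank * per_rank : (rank + 1) * per_rank]
```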

## Related