---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_examples: 7614
    - name: validation
      num_examples: 846
language:
  - en
license: mit
tags:
  - character-level
  - philosophy
  - mathematics
  - julia
  - scriptio-continua
  - reduced-vocab
size_categories:
  - 1K<n<10K
---

# JuliaGPT Training Data

Character-level training corpus for JuliaGPT, an experimental GPT written in pure Julia that explores minimal vocabularies inspired by ancient Greek scriptio continua (continuous writing without word separation).

## Sources (ordered, not shuffled)

  1. Aristotle - Rhetoric (5,460 chunks) — MIT Classics
  2. Euclid - The Elements (3,001 chunks) — Project Gutenberg

## Vocabulary

28 characters: `a`–`z`, space, and period. Numerals are converted to English words, and all other punctuation is removed. A BOS token brings the total vocabulary to 29.
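A minimal sketch of this 29-token vocabulary. The id assignment (BOS = 0, then the 28 characters in order) is an assumption for illustration; the actual mapping used by JuliaGPT is not specified in this README.

```python
# Assumed id layout: BOS=0, then a-z, space, period (ids 1..28).
CHARS = "abcdefghijklmnopqrstuvwxyz ."
BOS = 0  # beginning-of-sequence token

stoi = {ch: i + 1 for i, ch in enumerate(CHARS)}  # char -> id
itos = {i: ch for ch, i in stoi.items()}          # id -> char

def encode(text: str) -> list[int]:
    """Map a preprocessed chunk to token ids, prepending BOS."""
    return [BOS] + [stoi[ch] for ch in text]

def decode(ids: list[int]) -> str:
    """Map token ids back to text, skipping BOS."""
    return "".join(itos[i] for i in ids if i != BOS)

assert len(stoi) + 1 == 29  # 28 characters + BOS
```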

## Processing

  • Lowercase only
  • Numerals converted to words (e.g. "42" becomes "forty two")
  • All punctuation removed except period
  • Chunks max 256 chars, split at sentence boundaries
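The first three steps above can be sketched as follows. This is an illustrative reconstruction, not the actual pipeline: the README does not show how numbers were spelled out, so the toy converter below only handles integers under 100.

```python
import re

# Toy number-to-words conversion (integers 0-99 only; illustrative).
ONES = ("zero one two three four five six seven eight nine ten eleven "
        "twelve thirteen fourteen fifteen sixteen seventeen eighteen "
        "nineteen").split()
TENS = ("", "", "twenty", "thirty", "forty", "fifty", "sixty",
        "seventy", "eighty", "ninety")

def number_to_words(n: int) -> str:
    if n < 20:
        return ONES[n]
    tens, ones = divmod(n, 10)
    return TENS[tens] + (" " + ONES[ones] if ones else "")

def preprocess(text: str) -> str:
    text = text.lower()
    # Replace each run of digits with its spelled-out form.
    text = re.sub(r"\d+", lambda m: number_to_words(int(m.group())), text)
    # Drop every character outside the 28-symbol vocabulary (keep '.').
    text = re.sub(r"[^a-z .]", "", text)
    # Collapse whitespace left behind by the removals.
    return re.sub(r"\s+", " ", text).strip()
```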

## Format

One chunk per line in .

## Usage
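A minimal sketch of consuming the one-chunk-per-line format. The data file name is not stated in this README, so `train.txt` below is a placeholder; on the Hub the splits can alternatively be loaded with the `datasets` library's `load_dataset`.

```python
import tempfile

def read_chunks(path: str) -> list[str]:
    """Return one chunk per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f if line.strip()]

# Demo with a temporary stand-in for the real split file.
with tempfile.NamedTemporaryFile("w", suffix=".txt",
                                 delete=False) as f:
    f.write("rhetoric is the counterpart of dialectic.\n")
    f.write("a point is that which has no part.\n")
    path = f.name

chunks = read_chunks(path)
assert all(len(c) <= 256 for c in chunks)  # chunk-size invariant
```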