---
title: Pre-Punctuation Processor
emoji: 📜
colorFrom: yellow
colorTo: gray
sdk: gradio
app_file: app.py
pinned: false
license: mit
tags:
  - philosophy
  - nlp
  - training-data
  - classical-texts
  - character-level
---

# Pre-Punctuation Processor

A text processing pipeline that prepares ancient philosophical texts as training data for character-level language models, stripping them back to a pre-punctuation form faithful to how they were originally composed and spoken.

## Why Pre-Punctuation?

The philosophical texts in this corpus — Aristotle, Plato, Euclid, Seneca, Epictetus, Marcus Aurelius — were composed in an era before modern punctuation existed. Ancient Greek was written in *scriptio continua*: an unbroken stream of uppercase letters with no spaces, no commas, no quotation marks, no paragraph breaks.

The first systematic punctuation was invented by **Aristophanes of Byzantium** (c. 257–185 BC), head librarian of the Library of Alexandria. He devised a system of single dots (*théseis*) placed at different heights to mark breathing pauses for readers:

- **stigmḕ mésē** (·) mid-level dot — a short pause (*komma*)
- **hypostigmḗ** (.) low dot — a medium pause (*kolon*)
- **stigmḕ teleía** (˙) high dot — a full stop (*periodos*)

This system was a reading aid, not part of the texts themselves. The words of the philosophers predated any notation for pauses or structure.

## The Period as Pause Marker

This pipeline reduces all punctuation to a single mark: the **period** — a direct descendant of Aristophanes' dot system. In our output, the period functions not as a grammatical construct but as what it originally was: a marker for a pause in speech.

The resulting vocabulary is exactly **28 characters**: the 26 lowercase Latin letters, a space, and a period.
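As a rough illustration of that normalization (a minimal sketch, not the app's actual code; the function name `normalize` and the exact regex rules are assumptions):

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Reduce text to the 28-character vocabulary:
    the 26 lowercase letters, space, and the period."""
    # Strip accents and drop anything outside ASCII
    text = unicodedata.normalize("NFKD", text)
    text = text.encode("ascii", "ignore").decode("ascii").lower()
    # Collapse other sentence-level punctuation into the pause mark
    text = re.sub(r"[;:!?]+", ".", text)
    # Remove every character outside the vocabulary
    text = re.sub(r"[^a-z. ]", " ", text)
    # Normalize whitespace and spacing around periods
    text = re.sub(r"\s*\.\s*", ". ", text)
    return re.sub(r"\s+", " ", text).strip()
```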

## What This Tool Does

1. **Strips all non-body content** — Prefaces, editors' notes, appendices, transcriber corrections, publisher info, and source boilerplate (Gutenberg, MIT Classics, Internet Archive) are aggressively removed; only the philosopher's own words remain (first sketch below).
2. **Converts numerals to words** — Both Arabic (600 → "six hundred") and Roman (XIV → "fourteen") numerals become English words (second sketch below).
3. **Normalizes to the 28-char vocabulary** — Unicode is normalized to ASCII, text is lowercased, and all punctuation except the period is removed (sketched at the end of the previous section).
4. **Chunks for training** — Text is split into 40–256 character chunks at sentence boundaries (third sketch below).
5. **Publishes to HuggingFace** — Train/validation splits are pushed as a dataset for direct loading in notebooks.
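For step 1, a minimal sketch of the Project Gutenberg case, which brackets the body text with `*** START ... ***` and `*** END ... ***` marker lines (the real pipeline also handles MIT Classics and Internet Archive layouts, which this sketch does not):

```python
import re

# Project Gutenberg brackets the body with *** START/END ... *** lines
START = re.compile(r"\*{3}\s*START OF", re.IGNORECASE)
END = re.compile(r"\*{3}\s*END OF", re.IGNORECASE)

def strip_gutenberg(raw: str) -> str:
    """Keep only the text between the START and END markers;
    if no markers are found, return the input unchanged."""
    lines = raw.splitlines()
    start = next((i + 1 for i, ln in enumerate(lines) if START.search(ln)), 0)
    end = next((i for i, ln in enumerate(lines) if END.search(ln)), len(lines))
    return "\n".join(lines[start:end]).strip()
```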
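For step 2, one way to sketch the numeral conversion, using the `num2words` package for the English spellings; the `roman_to_int` helper and the regexes are illustrative, not the app's code:

```python
import re
from num2words import num2words  # pip install num2words

ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """XIV -> 14, handling subtractive notation (IV = 4, IX = 9, ...)."""
    total = 0
    for ch, nxt in zip(numeral, numeral[1:] + "?"):
        value = ROMAN[ch]
        total += -value if nxt in ROMAN and ROMAN[nxt] > value else value
    return total

def numerals_to_words(text: str) -> str:
    # Arabic numerals: 600 -> "six hundred"
    text = re.sub(r"\d+", lambda m: num2words(int(m.group())), text)
    # Roman numerals of two letters or more; a real pipeline would also
    # validate the numeral and use context to avoid false hits on
    # all-caps words such as "DID"
    return re.sub(r"\b[IVXLCDM]{2,}\b",
                  lambda m: num2words(roman_to_int(m.group())),
                  text)
```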
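For step 4, a plausible greedy chunker (`chunk_text` is a hypothetical name; the real pipeline's boundary rules may differ):

```python
def chunk_text(text: str, min_len: int = 40, max_len: int = 256) -> list[str]:
    """Greedily pack period-delimited sentences into chunks of
    min_len..max_len characters, splitting only at sentence ends."""
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    chunks, current = [], ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip()
        if len(candidate) <= max_len:
            current = candidate
            continue
        if len(current) >= min_len:
            chunks.append(current)
        # a sentence longer than max_len would need further splitting
        # in the real pipeline; this sketch simply drops it
        current = sentence if len(sentence) <= max_len else ""
    if min_len <= len(current) <= max_len:
        chunks.append(current)
    return chunks
```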

## Usage

**Drag and drop** a .txt, .epub, or .zip file, or paste a URL from Project Gutenberg, MIT Internet Classics, or the Internet Archive. The pipeline processes it and adds it to the corpus.

**Search the Internet Archive** to browse and add classical texts directly.

**Push to HuggingFace** to make the dataset available anywhere:

```python
from datasets import load_dataset

# Loads the processed corpus with its train/validation splits
ds = load_dataset("LisaMegaWatts/philosophy-corpus")
```
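Individual splits can then be accessed as `ds["train"]` and `ds["validation"]` (split names assumed from the train/validation description above).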

## Built for JuliaGPT

The output is designed for training a character-level GPT implemented in Julia, with a target vocabulary of 29 tokens (28 characters + BOS).
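For reference, the token mapping such a model would use might look like the following (a sketch in Python for illustration; the BOS index and JuliaGPT's actual encoding details are assumptions):

```python
# 26 letters + space + period = 28 characters, plus BOS = 29 tokens
CHARS = "abcdefghijklmnopqrstuvwxyz ."
BOS = 0                                    # BOS index is an assumption
stoi = {ch: i + 1 for i, ch in enumerate(CHARS)}   # ids 1..28
itos = {i: ch for ch, i in stoi.items()}

def encode(text: str) -> list[int]:
    """Prepend BOS and map each character to its token id; assumes
    text is already normalized to the 28-character vocabulary."""
    return [BOS] + [stoi[ch] for ch in text]

def decode(ids: list[int]) -> str:
    return "".join(itos[i] for i in ids if i != BOS)
```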