---
title: JuliaGPT
emoji: 🏛️
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
short_description: JuliaGPT — experimental GPT in pure Julia
app_port: 7860
---
# JuliaGPT
An experimental character-level GPT in pure Julia exploring minimal vocabularies inspired by ancient Greek *scriptio continua*.
Built from scratch with scalar autograd — no ML frameworks, just pure Julia.
Trained on Aristotle's Rhetoric and Euclid's Elements with a 28-character vocabulary (a-z + space + period).
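The minimal vocabulary described above is easy to sketch. Here is an illustrative Python analogue of such a character-level tokenizer (the project itself is pure Julia, and the token ids here are arbitrary assumptions, not the model's actual mapping):

```python
# Hypothetical sketch of a 28-character tokenizer plus a BOS token (29 ids total).
CHARS = "abcdefghijklmnopqrstuvwxyz ."  # a-z + space + period
BOS_ID = 0  # assumed id for the beginning-of-sequence token
stoi = {ch: i + 1 for i, ch in enumerate(CHARS)}  # ids 1..28
itos = {i: ch for ch, i in stoi.items()}

def encode(text: str) -> list[int]:
    """Map lowercase text to token ids, prepending BOS."""
    return [BOS_ID] + [stoi[ch] for ch in text]

def decode(ids: list[int]) -> str:
    """Map ids back to text, skipping BOS."""
    return "".join(itos[i] for i in ids if i != BOS_ID)
```

With 29 ids the embedding table stays tiny, which is what keeps the whole model near 5,000 parameters.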
## API
OpenAI-compatible inference endpoint:
```bash
curl -X POST https://lisamegawatts-juliagpt.hf.space/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"messages":[{"role":"user","content":""}],"temperature":0.8,"max_tokens":128}'
```
### Endpoints
| Method | Path | Description |
|--------|------|-------------|
| GET | `/` | Health check |
| GET | `/v1/models` | List models |
| POST | `/v1/chat/completions` | Generate text |
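Because the endpoint follows the OpenAI chat-completions shape, any OpenAI-style client works. A minimal Python sketch that builds the request from the curl example above (constructed but not sent here; the URL and parameters are taken from that example):

```python
import json
from urllib.request import Request

# Base URL from the curl example above.
BASE_URL = "https://lisamegawatts-juliagpt.hf.space"

def chat_request(prompt: str, temperature: float = 0.8, max_tokens: int = 128) -> Request:
    """Build a POST request for /v1/chat/completions."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    return Request(
        BASE_URL + "/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("")  # empty prompt, as in the curl example
```

Send it with `urllib.request.urlopen(req)` or swap in any HTTP client.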
## Architecture
- 1 transformer layer, 16-dim embeddings, 4 attention heads
- Custom scalar autograd engine (`Value` type)
- Character-level tokenizer — 28 chars + BOS = 29 vocab
- KV cache for efficient inference
- Context window: `block_size = 256`
- ~5,000 parameters
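The custom scalar autograd engine is built around a `Value` type that records, for each scalar, the operation that produced it. A rough Python analogue of that idea (micrograd-style; a sketch of the technique, not the actual Julia implementation):

```python
class Value:
    """Scalar holding data, a gradient, and the backward rule that produced it."""
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._children = children
        self._backward = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():  # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():  # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    build(c)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a, b = Value(2.0), Value(3.0)
c = a * b + a  # dc/da = b + 1 = 4, dc/db = a = 2
c.backward()
```

Every tensor op in the model reduces to graphs of these scalar nodes, which is what makes a framework-free implementation feasible at ~5,000 parameters.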
## Links
- [Model checkpoint](https://huggingface.co/LisaMegaWatts/JuliaGPT)
- [Training data](https://huggingface.co/datasets/LisaMegaWatts/juliagpt-data)
- [Source code](https://github.com/DavinciDreams/JuliaGPT)