---
title: JuliaGPT
emoji: 🏛️
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
short_description: JuliaGPT - experimental GPT in pure Julia
app_port: 7860
---
# JuliaGPT

An experimental character-level GPT in pure Julia exploring minimal vocabularies inspired by ancient Greek *scriptio continua*.

Built from scratch with scalar autograd: no ML frameworks, just pure Julia.

Trained on Aristotle's *Rhetoric* and Euclid's *Elements* with a 28-character vocabulary (a-z + space + period).
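The 28-character vocabulary above can be sketched as a simple lookup table. This is an illustrative sketch only; the names (`CHARS`, `encode`, `decode`, `BOS`) are assumptions, not the project's actual identifiers.

```julia
# 26 lowercase letters + period + space = 28 characters.
const CHARS = collect("abcdefghijklmnopqrstuvwxyz. ")
const CHAR_TO_ID = Dict(c => i for (i, c) in enumerate(CHARS))
const BOS = length(CHARS) + 1   # token 29: beginning-of-sequence marker

# Assumes the input contains only vocabulary characters.
encode(s::AbstractString) = [CHAR_TO_ID[c] for c in s]
decode(ids) = String([CHARS[i] for i in ids if i != BOS])

# A sequence fed to the model starts with the BOS token.
ids = [BOS; encode("the elements")]
```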
## API

OpenAI-compatible inference endpoint:

```bash
curl -X POST https://lisamegawatts-juliagpt.hf.space/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":""}],"temperature":0.8,"max_tokens":128}'
```
### Endpoints

| Method | Path | Description |
|--------|------|-------------|
| GET | `/` | Health check |
| GET | `/v1/models` | List models |
| POST | `/v1/chat/completions` | Generate text |
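The same endpoint can be called from Julia. This is a client-side sketch, not part of the JuliaGPT source; it assumes the third-party packages HTTP.jl and JSON3.jl are installed (`import Pkg; Pkg.add(["HTTP", "JSON3"])`), and that the response follows the standard OpenAI chat-completions shape.

```julia
using HTTP, JSON3

const ENDPOINT = "https://lisamegawatts-juliagpt.hf.space/v1/chat/completions"

# Build the OpenAI-style JSON request body.
payload(prompt; temperature = 0.8, max_tokens = 128) = JSON3.write((
    messages = [(role = "user", content = prompt)],
    temperature = temperature,
    max_tokens = max_tokens,
))

# POST the request and pull out the generated text.
function chat(prompt; kwargs...)
    resp = HTTP.post(ENDPOINT,
        ["Content-Type" => "application/json"],
        payload(prompt; kwargs...))
    JSON3.read(resp.body).choices[1].message.content
end
```

As in the curl example above, an empty prompt (`chat("")`) lets the model generate freely.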
## Architecture

- 1 transformer layer, 16-dim embeddings, 4 attention heads
- Custom scalar autograd engine (`Value` type)
- Character-level tokenizer: 28 chars + BOS = 29-token vocabulary
- KV cache for efficient inference
- Context window (`block_size`) of 256 tokens
- ~5,000 parameters
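A scalar autograd engine like the one listed above can be sketched in a few dozen lines of Julia. This is a hedged, micrograd-style illustration of the technique; the project's actual `Value` type may differ in fields and API.

```julia
# Each Value records its data, its accumulated gradient, a closure that
# propagates the gradient to its parents, and the parents themselves.
mutable struct Value
    data::Float64
    grad::Float64
    _backward::Function
    _prev::Vector{Value}
end

Value(x::Real) = Value(float(x), 0.0, () -> nothing, Value[])

function Base.:+(a::Value, b::Value)
    out = Value(a.data + b.data, 0.0, () -> nothing, [a, b])
    out._backward = () -> (a.grad += out.grad; b.grad += out.grad)
    out
end

function Base.:*(a::Value, b::Value)
    out = Value(a.data * b.data, 0.0, () -> nothing, [a, b])
    out._backward = () -> (a.grad += b.data * out.grad; b.grad += a.data * out.grad)
    out
end

# Reverse-mode sweep: topologically sort the graph, then apply each
# node's local backward rule from output to inputs.
function backward!(root::Value)
    topo, seen = Value[], Set{Value}()
    function visit(v)
        v in seen && return
        push!(seen, v)
        foreach(visit, v._prev)
        push!(topo, v)
    end
    visit(root)
    root.grad = 1.0
    foreach(v -> v._backward(), reverse(topo))
end
```

For example, with `z = x * y + x` where `x = Value(3.0)` and `y = Value(2.0)`, `backward!(z)` yields `x.grad == 3.0` (that is, `y + 1`) and `y.grad == 3.0` (that is, `x`).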
## Links

- [Model checkpoint](https://huggingface.co/LisaMegaWatts/JuliaGPT)
- [Training data](https://huggingface.co/datasets/LisaMegaWatts/juliagpt-data)
- [Source code](https://github.com/DavinciDreams/JuliaGPT)