---
title: JuliaSLM
emoji: 🏛️
colorFrom: purple
colorTo: blue
sdk: docker
app_port: 7860
pinned: false
license: mit
tags:
  - julia
  - lux
  - slm
  - philosophy
  - openai-compatible
  - bpe
  - rope
  - rmsnorm
  - swiglu
---

# JuliaSLM

A decoder-only transformer (RoPE, RMSNorm, SwiGLU) trained on classical philosophy texts, implemented in Julia with Lux.jl. Serves an OpenAI-compatible API with streaming support.

## Endpoints

- `GET /` — Health check and model info
- `GET /v1/models` — List available models
- `POST /v1/chat/completions` — Generate text (supports streaming, top-k, top-p)

## Usage

```bash
# Non-streaming
curl -X POST https://your-space.hf.space/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "the nature of"}], "max_tokens": 200}'

# Streaming
curl -X POST https://your-space.hf.space/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "the nature of"}], "stream": true, "temperature": 0.7, "top_k": 40}'
```

## Architecture

- **Model**: ~5M params, 256-d embeddings, 6 layers, 4 heads
- **Tokenizer**: BPE (2000-token vocabulary)
- **Framework**: Lux.jl (explicit parameter/state management)
- **Positional encoding**: Rotary Position Embeddings (RoPE)
- **Normalization**: RMSNorm (pre-norm)
- **Feed-forward**: SwiGLU activation
- **Weight tying**: Shared embedding/output projection
- **Inference**: CPU-only, no Lux dependency at runtime (pure NNlib)

## Required HF Model Repo Files

Upload these to `LisaMegaWatts/JuliaSLM` (or set the `HF_REPO` env var):

- `final.jld2` — Trained model checkpoint (parameters)
- `config.toml` — Model architecture configuration (from `5m.toml`)
- `vocab.json` — BPE vocabulary (dict format: `{"token": id, ...}`)
- `merges.txt` — BPE merge rules

## Environment Variables

- `HF_REPO` — HuggingFace model repo (default: `LisaMegaWatts/JuliaSLM`)
- `PORT` — Server port (default: `7860`)
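
## Julia Client Example

Since the server is OpenAI-compatible, it can be called from Julia as well as curl. Below is a minimal non-streaming sketch, assuming the HTTP.jl and JSON3.jl packages are installed; the Space URL is a placeholder, and the response fields (`choices[1].message.content`) follow the standard OpenAI chat-completions schema the API advertises.

```julia
using HTTP, JSON3

url = "https://your-space.hf.space/v1/chat/completions"
headers = ["Content-Type" => "application/json"]

# Same request fields as the curl example above.
body = JSON3.write(Dict(
    "messages"   => [Dict("role" => "user", "content" => "the nature of")],
    "max_tokens" => 200,
))

resp = HTTP.post(url, headers, body)
result = JSON3.read(resp.body)

# In the OpenAI schema, the generated text lives in choices[1].message.content.
println(result.choices[1].message.content)
```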
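
For streaming, the same endpoint can be consumed as server-sent events. The sketch below makes the same assumptions as above and additionally assumes the chunk layout is the standard OpenAI SSE format (`data: {...}` lines with `choices[1].delta.content`, terminated by `data: [DONE]`).

```julia
using HTTP, JSON3

url = "https://your-space.hf.space/v1/chat/completions"
headers = ["Content-Type" => "application/json"]
body = JSON3.write(Dict(
    "messages"    => [Dict("role" => "user", "content" => "the nature of")],
    "stream"      => true,
    "temperature" => 0.7,
    "top_k"       => 40,
))

HTTP.open("POST", url, headers) do io
    write(io, body)
    HTTP.closewrite(io)    # finish sending the request body
    HTTP.startread(io)     # read the response status line and headers
    while !eof(io)
        line = readline(io)
        startswith(line, "data: ") || continue   # skip blank/keep-alive lines
        payload = line[7:end]                    # strip the "data: " prefix
        payload == "[DONE]" && break
        chunk = JSON3.read(payload)
        delta = chunk.choices[1].delta
        haskey(delta, :content) && print(delta.content)
    end
end
println()
```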