Add model card

README.md (new file)

---
language: en
license: mit
pipeline_tag: text-generation
tags:
- pytorch
- llama-style
- rope
- swiglu
- gqa
- rmsnorm
- bpe
- philosophy
- openai-compatible
- symbiogenesis
- distillation
- cross-species
model-index:
- name: JuliaFluxGPT-distilled
  results:
  - task:
      type: text-generation
      name: Philosophy Text Generation
    dataset:
      type: custom
      name: Classical Philosophy Corpus (266M tokens)
    metrics:
    - type: loss
      name: Val Loss
      value: 3.687
    - type: perplexity
      name: Perplexity
      value: 39.9
---

# JuliaFluxGPT-distilled

Cross-species knowledge distillation: two JuliaFluxGPT siblings, v1 (JuliaSLM fusion, val_loss=3.687) and v2 (Pythia-14m fusion, val_loss=3.856), serve as co-teachers for a student model. The student inherits v1's perplexity advantage and v2's linguistic quality (superior grammar, coherence, and syntactic complexity).

## Why Distillation?

Weight-level fusion between v1 and v2 fails catastrophically: even a 50/50 alpha blend produces loss=10.3. Evolutionary per-layer search, SLERP, and Kuramoto sync all fail as well. Having been fused from different parent species (JuliaSLM vs. Pythia-14m), the two models occupy separate loss basins. Knowledge distillation bypasses this by operating on output distributions instead of weight matrices; see the sketch below.

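For reference, "weight-level fusion" here means direct per-parameter interpolation of the two checkpoints. A minimal sketch of the 50/50 alpha blend that fails (the function name and state-dict handling are illustrative, not from the repo):

```python
import torch

def alpha_blend(state_v1: dict, state_v2: dict, alpha: float = 0.5) -> dict:
    """Interpolate two state dicts parameter-by-parameter."""
    assert state_v1.keys() == state_v2.keys()
    return {k: alpha * state_v1[k] + (1.0 - alpha) * state_v2[k]
            for k in state_v1}
```

Because v1 and v2 sit in separate loss basins, the straight line between them in weight space passes through high-loss territory, which is why uniform blending, per-layer search, and SLERP all land on a bad model.
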
## Architecture

| | |
|---|---|
| **Parameters** | ~23M |
| **Embedding dim** | 512 |
| **Layers** | 8 |
| **Attention** | GQA (8 query heads, 2 KV heads) |
| **Head dim** | 64 |
| **FFN** | SwiGLU (inner dim 1344) |
| **Normalization** | RMSNorm (pre-norm) |
| **Position encoding** | RoPE (base=10000) |
| **Context length** | 256 |
| **Vocab** | 2000 (BPE) |
| **Weight tying** | Yes (embedding = output projection) |

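The table maps onto a compact hyperparameter set. A hypothetical config sketch (field names are illustrative; the real definitions live in `juliaflux_model.py`):

```python
from dataclasses import dataclass

@dataclass
class JuliaFluxConfig:
    vocab_size: int = 2000       # BPE vocabulary
    d_model: int = 512           # embedding dim
    n_layers: int = 8
    n_heads: int = 8             # query heads
    n_kv_heads: int = 2          # GQA: every 4 query heads share one KV head
    head_dim: int = 64           # d_model / n_heads
    ffn_inner: int = 1344        # SwiGLU hidden size
    max_seq_len: int = 256       # context length
    rope_base: float = 10000.0
    tie_embeddings: bool = True  # output projection reuses the embedding matrix
```
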
## Training

| | |
|---|---|
| **Method** | Knowledge distillation (warm start) |
| **Teacher 1** | JuliaFluxGPT-fused v1 (JuliaSLM fusion, val_loss=3.687) |
| **Teacher 2** | JuliaFluxGPT-fused v2 (Pythia fusion, val_loss=3.856) |
| **Student init** | v1 weights (warm start) |
| **Loss** | 0.35 KL(s\|\|v1) + 0.35 KL(s\|\|v2) + 0.30 CE(s, targets) |
| **Temperature** | 3.0 |
| **Steps** | 3000 (best at step 2600) |
| **LR** | 3e-4 (cosine decay) |
| **Optimizer** | AdamW (weight_decay=0.01) |
| **Val loss** | 3.687 (beats both parents) |

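For concreteness, a minimal sketch of the loss row above, assuming the standard temperature-scaled formulation with a T² gradient correction (function and argument names are illustrative, not from the repo):

```python
import torch.nn.functional as F

def distill_loss(student_logits, v1_logits, v2_logits, targets, T=3.0):
    """0.35*KL(s||v1) + 0.35*KL(s||v2) + 0.30*CE(s, targets).

    All logits are (batch, seq, vocab); targets are (batch, seq) token ids.
    """
    V = student_logits.size(-1)
    s_logp = F.log_softmax(student_logits / T, dim=-1).view(-1, V)
    kl_v1 = F.kl_div(s_logp, F.softmax(v1_logits / T, dim=-1).view(-1, V),
                     reduction="batchmean") * T * T
    kl_v2 = F.kl_div(s_logp, F.softmax(v2_logits / T, dim=-1).view(-1, V),
                     reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits.view(-1, V), targets.view(-1))
    return 0.35 * kl_v1 + 0.35 * kl_v2 + 0.30 * ce
```

In this sketch only the teacher-matching terms are softened at T=3; the hard-label CE term uses the unscaled logits.
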
## Scaling Context

| Model | Params | d_model | Val Loss | Method |
|-------|--------|---------|----------|--------|
| MicroJulia | 1M | 192 | — | Baseline |
| JuliaSLM | 5M | 256 | 3.54 | Baseline |
| SymbioSLM | 5M | 256 | 3.48 | Multi-organelle |
| MonarchSLM | 5M | 256 | 3.51 | Monarch matrices |
| JuliaFluxGPT-fused (v1) | 23M | 512 | 3.698 | JuliaSLM fusion |
| JuliaFluxGPT-fused-v2 | 23M | 512 | 3.873 | Pythia fusion |
| **JuliaFluxGPT-distilled** | **23M** | **512** | **3.687** | **v1+v2 distillation** |

## Files

| File | Description |
|------|-------------|
| `juliaflux_distilled_warm_best.pt` | Best checkpoint (step 2600, val_loss=3.687) |
| `juliaflux_model.py` | Model definition (JuliaFluxGPT class) |
| `vocab.json` | BPE vocabulary (2000 tokens) |
| `merges.txt` | BPE merge rules |

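A hypothetical loading snippet; the constructor arguments and checkpoint layout are assumptions, so check `juliaflux_model.py` for the actual interface:

```python
import torch
from juliaflux_model import JuliaFluxGPT  # class name per the table above

# Assumption: the .pt file holds a plain state dict.
state = torch.load("juliaflux_distilled_warm_best.pt", map_location="cpu")

model = JuliaFluxGPT()  # constructor args are an assumption; see juliaflux_model.py
model.load_state_dict(state)
model.eval()
```
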
## Links

- **Inference Space**: [LisaMegaWatts/JuliaFluxGPT-distilled](https://huggingface.co/spaces/LisaMegaWatts/JuliaFluxGPT-distilled)
- **Parent v1**: [LisaMegaWatts/JuliaFluxGPT-fused](https://huggingface.co/LisaMegaWatts/JuliaFluxGPT-fused)
- **Parent v2**: [LisaMegaWatts/JuliaFluxGPT-fused-v2](https://huggingface.co/LisaMegaWatts/JuliaFluxGPT-fused-v2)
- **Source code**: [DavinciDreams/SymbioGPT](https://github.com/DavinciDreams/SymbioGPT)
- **W&B project**: [symbiogenesis](https://wandb.ai/lisamegawatts-decentralized-intelligence-agency/symbiogenesis)