sumit committed
Commit · 4049aa7
Parent(s): 4fa3232
updated model card with ablation results and all 4 runs
README.md CHANGED
@@ -21,13 +21,15 @@ pipeline_tag: text-generation
Checkpoints from a research project studying expert specialization in Mixture-of-Experts models. I fine-tuned GPT-2 small on three domains -- code, math, and prose -- to see whether experts naturally specialize by domain when given the right routing incentives.
-Short answer: they do. MoE beats the dense baseline by 3.6% overall and 14% on math, with zero expert collapse across 10,000 training steps.
---
## 1. Results
-| Metric | Dense Baseline | MoE | Delta |
|---|---|---|---|
| eval/loss | 2.157 | 2.080 | -3.6% |
| loss_code | 1.554 | 1.521 | -2.1% |
@@ -37,6 +39,13 @@ Short answer: they do. MoE beats the dense baseline by 3.6% overall and 14% on m
Math benefits the most from expert routing. Prose is the one domain where dense wins; diverse web text doesn't lend itself to clean expert specialization. The MoE model crossed the dense baseline at step ~3,600 (36% of training).
---
## 2. Files
@@ -45,14 +54,28 @@ Math benefits the most from expert routing. Prose is the one domain where dense
dense-baseline/
├── final-model.safetensors # 622 MB -- dense GPT-2, 124M params
├── final-model.json # metadata sidecar (config, metrics)
-├── ckpt-step-4999.pt # 1.4 GB -- full resume checkpoint
└── metrics.jsonl # per-step training + eval metrics

moe-main/
├── final-model.safetensors # 1.1 GB -- MoE GPT-2, 257M params (8 experts × 4 layers)
-├── final-model.json # metadata sidecar
├── ckpt-step-9999.pt # 2.9 GB -- full resume checkpoint
-└── metrics.jsonl
```
The `.safetensors` files are the trained model weights. The `.pt` files contain the full training state for resuming runs (optimizer, LR scheduler, RNG states). The `.json` sidecars store architecture config and final eval metrics.
@@ -119,7 +142,7 @@ Routing uses the Straight-Through Estimator. Forward pass routes to one expert w
## 5. Training
-Both models trained on ~6.6M tokens across three domains, balanced to equal token counts:
| Domain | Source | Size |
|---|---|---|
@@ -129,17 +152,18 @@ Both models trained on ~6.6M tokens across three domains, balanced to equal toke
Training config:
-| Parameter | Dense | MoE |
-|---|---|---|
-| Max steps | 5,000 | 10,000 |
-| Batch size | 8 | 8 |
-| Block size | 512 | 512 |
-| Learning rate | 5e-5 | 5e-5 |
-| lb_coef | — | 0.01 |
-| noise_std | — | 0.1 |
-| Hardware | 1× RTX 4090 | 1× RTX 4090 |
-| Wall time | ~30 min | ~85 min |
---
@@ -149,13 +173,15 @@ Training curves are on Weights & Biases:
- [Dense baseline](https://wandb.ai/sumit-ml/moe-emergence/runs/fqhfblfv)
- [MoE main run](https://wandb.ai/sumit-ml/moe-emergence/runs/j08s2d1m)
---
## 7. Links
- Code: [github.com/sumitdotml/moe-emergence](https://github.com/sumitdotml/moe-emergence)
-- Experiment docs: [run-004 (dense)](https://github.com/sumitdotml/moe-emergence/blob/main/docs/experiments/run-004-dense-baseline.md), [run-005 (MoE)](https://github.com/sumitdotml/moe-emergence/blob/main/docs/experiments/run-005-moe-main.md)
---
Checkpoints from a research project studying expert specialization in Mixture-of-Experts models. I fine-tuned GPT-2 small on three domains -- code, math, and prose -- to see whether experts naturally specialize by domain when given the right routing incentives.
+Short answer: they do. MoE beats the dense baseline by 3.6% overall and 14% on math, with zero expert collapse across 10,000 training steps. Two ablation runs confirmed that load balancing loss is essential (without it, one expert captures 73.6% of tokens by step 500) and that top-2 routing provides negligible improvement over top-1.
---
## 1. Results
+### Main comparison
+
+| Metric | Dense Baseline | MoE (top-1) | Delta |
|---|---|---|---|
| eval/loss | 2.157 | 2.080 | -3.6% |
| loss_code | 1.554 | 1.521 | -2.1% |
Math benefits the most from expert routing. Prose is the one domain where dense wins; diverse web text doesn't lend itself to clean expert specialization. The MoE model crossed the dense baseline at step ~3,600 (36% of training).
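The crossover step can be read off each run's `metrics.jsonl`. A minimal sketch, assuming every line is a JSON object and that eval records carry `step` and `eval/loss` fields (check the actual files for the exact keys):

```python
import json

def eval_curve(path):
    """Map eval step -> eval/loss from a metrics.jsonl file (field names assumed)."""
    curve = {}
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            if "eval/loss" in record:
                curve[record["step"]] = record["eval/loss"]
    return curve

dense = eval_curve("dense-baseline/metrics.jsonl")
moe = eval_curve("moe-main/metrics.jsonl")

# First eval step where the MoE loss drops below the dense loss at the same step.
crossover = next(step for step in sorted(moe) if step in dense and moe[step] < dense[step])
print(f"MoE crosses the dense baseline around step {crossover}")
```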
+### Ablations
+
+| Run | What it tests | Result |
+|---|---|---|
+| No-LB ablation | Remove load balancing loss (`lb_coef=0.0`) | Expert collapse at step 500. Single expert handles 73.6% of tokens. Z-loss alone doesn't prevent it. |
+| Top-2 directional | Route to 2 experts instead of 1 | eval/loss 2.077 vs. 2.080 for top-1, a 0.14% difference. Not worth 2× the expert compute. |
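For context on what the no-LB ablation removes, here is a minimal sketch of top-1 routing with a Switch-Transformer-style load-balancing loss. This is a common formulation shown for illustration only; the repo's actual router, loss, and gating details may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top1Router(nn.Module):
    """Illustrative top-1 router with a Switch-style load-balancing loss (not the repo's exact code)."""

    def __init__(self, d_model: int, n_experts: int, lb_coef: float = 0.01):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.n_experts = n_experts
        self.lb_coef = lb_coef

    def forward(self, x: torch.Tensor):
        # x: (n_tokens, d_model)
        logits = self.gate(x)                      # (n_tokens, n_experts)
        probs = F.softmax(logits, dim=-1)
        expert_idx = probs.argmax(dim=-1)          # hard top-1 assignment per token

        # Switch-style balancing term: n_experts * sum_e(frac_tokens_e * mean_prob_e),
        # minimized when token counts and router probabilities are both uniform.
        assign = F.one_hot(expert_idx, self.n_experts).float()
        frac_tokens = assign.mean(dim=0)           # fraction of tokens routed to each expert
        mean_prob = probs.mean(dim=0)              # mean router probability per expert
        lb_loss = self.n_experts * (frac_tokens * mean_prob).sum()

        # With lb_coef = 0.0 (the no-LB ablation) nothing discourages the router
        # from sending every token to the same expert, hence the collapse above.
        return expert_idx, probs, self.lb_coef * lb_loss
```

In the full MoE block, the chosen expert's output is typically scaled by its gate probability (or passed through a straight-through estimator, as mentioned in the routing hunk above) so the router still receives gradients.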
---
## 2. Files
dense-baseline/
├── final-model.safetensors # 622 MB -- dense GPT-2, 124M params
├── final-model.json # metadata sidecar (config, metrics)
+├── ckpt-step-4999.pt # 1.4 GB -- full resume checkpoint
└── metrics.jsonl # per-step training + eval metrics

moe-main/
├── final-model.safetensors # 1.1 GB -- MoE GPT-2, 257M params (8 experts × 4 layers)
+├── final-model.json # metadata sidecar
├── ckpt-step-9999.pt # 2.9 GB -- full resume checkpoint
+└── metrics.jsonl
+
+no-lb-ablation/
+├── final-model.safetensors # 1.1 GB -- collapsed MoE model at step 500
+├── best-model.safetensors # 1.1 GB -- best eval loss (step 400, pre-collapse)
+├── ckpt-step-500.pt # 2.9 GB -- full resume checkpoint
+├── config.json, run_summary.json
+└── metrics.jsonl
+
+top2-main-10k/
+├── final-model.safetensors # 1.2 GB -- top-2 MoE model at step 9999
+├── best-model.safetensors # 1.2 GB -- best eval loss (step 8000)
+├── ckpt-step-9999.pt # 2.9 GB -- full resume checkpoint
+├── config.json, run_summary.json
+└── metrics.jsonl
```
The `.safetensors` files are the trained model weights. The `.pt` files contain the full training state for resuming runs (optimizer, LR scheduler, RNG states). The `.json` sidecars store architecture config and final eval metrics.
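A minimal sketch of how these files could be consumed. The paths come from the tree above; anything about the keys inside them is an assumption, so defer to the repo's own loading code:

```python
import json
import torch
from safetensors.torch import load_file

# Trained weights: a flat dict of parameter name -> tensor.
state_dict = load_file("moe-main/final-model.safetensors")
n_params = sum(t.numel() for t in state_dict.values())
print(f"{len(state_dict)} tensors, ~{n_params / 1e6:.0f}M parameters")

# Metadata sidecar: architecture config and final eval metrics.
with open("moe-main/final-model.json") as f:
    meta = json.load(f)

# Full resume checkpoint: optimizer, LR scheduler, and RNG states alongside the weights.
# (Recent PyTorch versions may require weights_only=False for the non-tensor entries.)
ckpt = torch.load("moe-main/ckpt-step-9999.pt", map_location="cpu")

# Per-step training/eval metrics: one JSON object per line.
with open("moe-main/metrics.jsonl") as f:
    metrics = [json.loads(line) for line in f if line.strip()]
```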
## 5. Training
+All models trained on ~6.6M tokens across three domains, balanced to equal token counts:
| Domain | Source | Size |
|---|---|---|
Training config:
+| Parameter | Dense | MoE (top-1) | MoE (top-2) | No-LB |
+|---|---|---|---|---|
+| Max steps | 5,000 | 10,000 | 10,000 | 2,000 (early-stopped at 500) |
+| Batch size | 8 | 8 | 8 | 8 |
+| Block size | 512 | 512 | 512 | 512 |
+| Learning rate | 5e-5 | 5e-5 | 5e-5 | 5e-5 |
+| lb_coef | — | 0.01 | 0.01 | 0.0 |
+| noise_std | — | 0.1 | 0.1 | 0.0 |
+| Hardware | 1× RTX 4090 | 1× RTX 4090 | 1× RTX 4090 | 1× RTX 4090 |
+| Wall time | ~30 min | ~85 min | ~48 min | ~5 min |
+Total GPU cost for all 4 runs: ~$2.79 (including setup/idle overhead).
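For a rough sense of scale, tokens processed per run follow directly from the table above (assuming each step consumes batch_size * block_size tokens and no gradient accumulation, neither of which the table specifies):

```python
# steps * batch_size * block_size ~= tokens processed per run (upper bound)
batch_size, block_size, dataset_tokens = 8, 512, 6.6e6
for run, steps in {"dense": 5_000, "moe top-1": 10_000, "moe top-2": 10_000, "no-lb": 500}.items():
    tokens = steps * batch_size * block_size
    print(f"{run:10s} ~{tokens / 1e6:.1f}M tokens (~{tokens / dataset_tokens:.1f} passes over the ~6.6M-token set)")
```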
---
- [Dense baseline](https://wandb.ai/sumit-ml/moe-emergence/runs/fqhfblfv)
- [MoE main run](https://wandb.ai/sumit-ml/moe-emergence/runs/j08s2d1m)
+- [No-LB ablation](https://wandb.ai/sumit-ml/moe-emergence/runs/06pljhrv)
+- [Top-2 directional](https://wandb.ai/sumit-ml/moe-emergence/runs/6mw6qbac)
---
## 7. Links
- Code: [github.com/sumitdotml/moe-emergence](https://github.com/sumitdotml/moe-emergence)
+- Experiment docs: [run-004 (dense)](https://github.com/sumitdotml/moe-emergence/blob/main/docs/experiments/run-004-dense-baseline.md), [run-005 (MoE)](https://github.com/sumitdotml/moe-emergence/blob/main/docs/experiments/run-005-moe-main.md), [run-006 (no-LB)](https://github.com/sumitdotml/moe-emergence/blob/main/docs/experiments/run-006-no-lb-ablation.md), [run-007 (top-2)](https://github.com/sumitdotml/moe-emergence/blob/main/docs/experiments/run-007-top2-directional.md)
---