sumit committed
Commit 4049aa7 · 1 Parent(s): 4fa3232

updated model card with ablation results and all 4 runs

Files changed (1)
  1. README.md +44 -18
README.md CHANGED
@@ -21,13 +21,15 @@ pipeline_tag: text-generation
 
 Checkpoints from a research project studying expert specialization in Mixture-of-Experts models. I fine-tuned GPT-2 small on three domains -- code, math, and prose -- to see whether experts naturally specialize by domain when given the right routing incentives.
 
-Short answer: they do. MoE beats the dense baseline by 3.6% overall and 14% on math, with zero expert collapse across 10,000 training steps.
 
 ---
 
 ## 1. Results
 
-| Metric | Dense Baseline | MoE (8 experts, top-1) | Delta |
 |---|---|---|---|
 | eval/loss | 2.157 | 2.080 | -3.6% |
 | loss_code | 1.554 | 1.521 | -2.1% |
@@ -37,6 +39,13 @@ Short answer: they do. MoE beats the dense baseline by 3.6% overall and 14% on m
 
 Math benefits the most from expert routing. Prose is the one domain where dense wins; diverse web text doesn't lend itself to clean expert specialization. The MoE model crossed the dense baseline at step ~3,600 (36% of training).
 
 ---
 
 ## 2. Files
@@ -45,14 +54,28 @@ Math benefits the most from expert routing. Prose is the one domain where dense
 dense-baseline/
 ├── final-model.safetensors # 622 MB -- dense GPT-2, 124M params
 ├── final-model.json # metadata sidecar (config, metrics)
-├── ckpt-step-4999.pt # 1.4 GB -- full resume checkpoint (optimizer, scheduler, RNG)
 └── metrics.jsonl # per-step training + eval metrics
 
 moe-main/
 ├── final-model.safetensors # 1.1 GB -- MoE GPT-2, 257M params (8 experts × 4 layers)
-├── final-model.json # metadata sidecar (config, metrics)
 ├── ckpt-step-9999.pt # 2.9 GB -- full resume checkpoint
-└── metrics.jsonl # per-step training + eval metrics
 ```
 
 The `.safetensors` files are the trained model weights. The `.pt` files contain the full training state for resuming runs (optimizer, LR scheduler, RNG states). The `.json` sidecars store architecture config and final eval metrics.
@@ -119,7 +142,7 @@ Routing uses the Straight-Through Estimator. Forward pass routes to one expert w
 
 ## 5. Training
 
-Both models trained on ~6.6M tokens across three domains, balanced to equal token counts:
 
 | Domain | Source | Size |
 |---|---|---|
@@ -129,17 +152,18 @@ Both models trained on ~6.6M tokens across three domains, balanced to equal toke
 
 Training config:
 
-| Parameter | Dense | MoE |
-|---|---|---|
-| Max steps | 5,000 | 10,000 |
-| Batch size | 2 × 4 grad accum = 8 | 2 × 4 grad accum = 8 |
-| Block size | 512 | 512 |
-| Learning rate | 5e-5 | 5e-5 |
-| Warmup | 10% | 10% |
-| Schedule | Cosine | Cosine |
-| Hardware | 1× RTX 4090 24GB | 1× RTX 4090 24GB |
-| Wall time | ~30 min | ~85 min |
-| Throughput | ~25.7k tok/s | ~14.2k tok/s |
 
 ---
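The hunk header above quotes the card's routing description ("Routing uses the Straight-Through Estimator..."), which this commit leaves unchanged. For orientation, here is a minimal PyTorch sketch of straight-through top-k routing in that style; the names are illustrative, not the repo's actual API.

```python
import torch
import torch.nn.functional as F

def ste_route(router_logits: torch.Tensor, top_k: int = 1) -> torch.Tensor:
    """Straight-through top-k routing weights.

    Forward pass: a hard one-hot (top-1) or two-hot (top-2) expert assignment.
    Backward pass: gradients flow through the soft router probabilities.
    router_logits: (num_tokens, num_experts)
    """
    probs = F.softmax(router_logits, dim=-1)
    topk_idx = probs.topk(top_k, dim=-1).indices
    hard = torch.zeros_like(probs).scatter(-1, topk_idx, 1.0)
    # Straight-through estimator: forward value of `hard`, gradient of `probs`.
    return hard + probs - probs.detach()

# Hypothetical use inside an MoE feed-forward block with 8 experts:
#   weights = ste_route(self.router(x), top_k=1)          # (num_tokens, 8)
#   y = sum(weights[:, e:e + 1] * expert(x) for e, expert in enumerate(self.experts))
```

With `top_k=1` this corresponds to the main MoE run; the top-2 directional run routes each token to two experts instead.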
 
@@ -149,13 +173,15 @@ Training curves are on Weights & Biases:
 
 - [Dense baseline](https://wandb.ai/sumit-ml/moe-emergence/runs/fqhfblfv)
 - [MoE main run](https://wandb.ai/sumit-ml/moe-emergence/runs/j08s2d1m)
 
 ---
 
 ## 7. Links
 
 - Code: [github.com/sumitdotml/moe-emergence](https://github.com/sumitdotml/moe-emergence)
-- Experiment docs: [run-004 (dense)](https://github.com/sumitdotml/moe-emergence/blob/main/docs/experiments/run-004-dense-baseline.md), [run-005 (MoE)](https://github.com/sumitdotml/moe-emergence/blob/main/docs/experiments/run-005-moe-main.md)
 
 ---
 
 Checkpoints from a research project studying expert specialization in Mixture-of-Experts models. I fine-tuned GPT-2 small on three domains -- code, math, and prose -- to see whether experts naturally specialize by domain when given the right routing incentives.
 
+Short answer: they do. MoE beats the dense baseline by 3.6% overall and 14% on math, with zero expert collapse across 10,000 training steps. Two ablation runs confirmed that load balancing loss is essential (without it, one expert captures 73.6% of tokens by step 500) and that top-2 routing provides negligible improvement over top-1.
 
 ---
 
 ## 1. Results
 
+### Main comparison
+
+| Metric | Dense Baseline | MoE (top-1) | Delta |
 |---|---|---|---|
 | eval/loss | 2.157 | 2.080 | -3.6% |
 | loss_code | 1.554 | 1.521 | -2.1% |
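The Delta column reads as the relative change versus the dense baseline; a one-line check of the two rows shown in this hunk:

```python
# Relative change vs. the dense baseline, matching the Delta column above
for name, dense, moe in [("eval/loss", 2.157, 2.080), ("loss_code", 1.554, 1.521)]:
    print(name, f"{(moe - dense) / dense:+.1%}")   # -3.6%, -2.1%
```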
 
 Math benefits the most from expert routing. Prose is the one domain where dense wins; diverse web text doesn't lend itself to clean expert specialization. The MoE model crossed the dense baseline at step ~3,600 (36% of training).
 
+### Ablations
+
+| Run | What it tests | Result |
+|---|---|---|
+| No-LB ablation | Remove the load balancing loss (`lb_coef=0.0`) | Expert collapse by step 500: a single expert handles 73.6% of tokens. Z-loss alone doesn't prevent it. |
+| Top-2 directional | Route to 2 experts instead of 1 | eval/loss = 2.077 vs. 2.080 for top-1, a 0.14% difference. Not worth 2× the expert compute. |
+
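For readers who haven't seen the `lb_coef` term before: a Switch-Transformer-style load-balancing auxiliary loss penalizes the product of each expert's token fraction and mean routing probability, nudging the router toward uniform usage; the no-LB run above sets its coefficient to 0.0. A minimal sketch under that assumption (illustrative names; the repo's actual implementation may differ):

```python
import torch
import torch.nn.functional as F

def load_balance_loss(router_logits: torch.Tensor, num_experts: int = 8) -> torch.Tensor:
    """Switch-Transformer-style auxiliary loss encouraging uniform expert usage.

    router_logits: (num_tokens, num_experts) pre-softmax router scores.
    Returns a scalar equal to 1.0 when routing is perfectly balanced.
    """
    probs = F.softmax(router_logits, dim=-1)               # (tokens, E)
    assignment = probs.argmax(dim=-1)                       # top-1 expert per token
    # f_e: fraction of tokens routed to each expert (73.6% on one expert = collapse)
    token_fraction = F.one_hot(assignment, num_experts).float().mean(dim=0)
    # p_e: mean router probability assigned to each expert
    mean_prob = probs.mean(dim=0)
    return num_experts * torch.sum(token_fraction * mean_prob)

# Hypothetical use in the training loss, with lb_coef=0.01 for the main runs
# and 0.0 for the no-LB ablation (which collapsed):
#   loss = lm_loss + lb_coef * load_balance_loss(router_logits)
```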
 ---
 
 ## 2. Files
 
 dense-baseline/
 ├── final-model.safetensors # 622 MB -- dense GPT-2, 124M params
 ├── final-model.json # metadata sidecar (config, metrics)
+├── ckpt-step-4999.pt # 1.4 GB -- full resume checkpoint
 └── metrics.jsonl # per-step training + eval metrics
 
 moe-main/
 ├── final-model.safetensors # 1.1 GB -- MoE GPT-2, 257M params (8 experts × 4 layers)
+├── final-model.json # metadata sidecar
 ├── ckpt-step-9999.pt # 2.9 GB -- full resume checkpoint
+└── metrics.jsonl
+
+no-lb-ablation/
+├── final-model.safetensors # 1.1 GB -- collapsed MoE model at step 500
+├── best-model.safetensors # 1.1 GB -- best eval loss (step 400, pre-collapse)
+├── ckpt-step-500.pt # 2.9 GB -- full resume checkpoint
+├── config.json, run_summary.json
+└── metrics.jsonl
+
+top2-main-10k/
+├── final-model.safetensors # 1.2 GB -- top-2 MoE model at step 9999
+├── best-model.safetensors # 1.2 GB -- best eval loss (step 8000)
+├── ckpt-step-9999.pt # 2.9 GB -- full resume checkpoint
+├── config.json, run_summary.json
+└── metrics.jsonl
 ```
 
 The `.safetensors` files are the trained model weights. The `.pt` files contain the full training state for resuming runs (optimizer, LR scheduler, RNG states). The `.json` sidecars store architecture config and final eval metrics.
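A minimal sketch of reading these artifacts, assuming only the layout shown above. Instantiating the MoE model itself still needs the model class from the repo, and the checkpoint keys in the comments are guesses rather than the repo's actual format:

```python
import json
import torch
from safetensors.torch import load_file

# Trained weights: a flat state_dict of tensors
state_dict = load_file("moe-main/final-model.safetensors")
print(sum(p.numel() for p in state_dict.values()))   # ~257M params for moe-main

# Metadata sidecar: architecture config and final eval metrics
with open("moe-main/final-model.json") as f:
    meta = json.load(f)

# Per-step training + eval metrics: one JSON object per line
with open("moe-main/metrics.jsonl") as f:
    metrics = [json.loads(line) for line in f]

# Full resume checkpoint: optimizer, LR scheduler, and RNG state alongside weights
ckpt = torch.load("moe-main/ckpt-step-9999.pt", map_location="cpu")
# e.g. model.load_state_dict(ckpt["model"]); optimizer.load_state_dict(ckpt["optimizer"])
# (exact keys depend on how the repo saved the checkpoint)
```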
 
 ## 5. Training
 
+All models trained on ~6.6M tokens across three domains, balanced to equal token counts:
 
 | Domain | Source | Size |
 |---|---|---|
 
 Training config:
 
+| Parameter | Dense | MoE (top-1) | MoE (top-2) | No-LB |
+|---|---|---|---|---|
+| Max steps | 5,000 | 10,000 | 10,000 | 2,000 (early-stopped at 500) |
+| Batch size | 8 | 8 | 8 | 8 |
+| Block size | 512 | 512 | 512 | 512 |
+| Learning rate | 5e-5 | 5e-5 | 5e-5 | 5e-5 |
+| lb_coef | N/A | 0.01 | 0.01 | 0.0 |
+| noise_std | N/A | 0.1 | 0.1 | 0.0 |
+| Hardware | 1× RTX 4090 | 1× RTX 4090 | 1× RTX 4090 | 1× RTX 4090 |
+| Wall time | ~30 min | ~85 min | ~48 min | ~5 min |
+
+Total GPU cost for all 4 runs: ~$2.79 (including setup/idle overhead).
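A quick sanity check on data scale, using the batch size of 8 sequences and 512-token block size from the table above together with the ~6.6M-token corpus (rough arithmetic, not taken from the repo):

```python
# Tokens per optimizer step and approximate passes over the ~6.6M-token corpus
tokens_per_step = 8 * 512                            # effective batch size * block size = 4,096
corpus_tokens = 6_600_000
steps_per_epoch = corpus_tokens / tokens_per_step    # ~1,611 steps per pass

print(f"dense: {5_000 / steps_per_epoch:.1f} passes")    # ~3.1
print(f"MoE:   {10_000 / steps_per_epoch:.1f} passes")   # ~6.2
```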
 ---
 
 - [Dense baseline](https://wandb.ai/sumit-ml/moe-emergence/runs/fqhfblfv)
 - [MoE main run](https://wandb.ai/sumit-ml/moe-emergence/runs/j08s2d1m)
+- [No-LB ablation](https://wandb.ai/sumit-ml/moe-emergence/runs/06pljhrv)
+- [Top-2 directional](https://wandb.ai/sumit-ml/moe-emergence/runs/6mw6qbac)
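The curves behind these links can also be pulled programmatically with the public `wandb` API; the run path below is taken from the MoE main run URL, and the exact metric keys logged may differ from the names shown in this card:

```python
import wandb

api = wandb.Api()
run = api.run("sumit-ml/moe-emergence/j08s2d1m")   # MoE main run

# history() returns a pandas DataFrame of logged metrics (sampled by default)
history = run.history()
print(history.columns.tolist())        # inspect which keys were actually logged
# e.g. history[["_step", "eval/loss"]].dropna() to plot the eval curve
```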
 
 ---
 
 ## 7. Links
 
 - Code: [github.com/sumitdotml/moe-emergence](https://github.com/sumitdotml/moe-emergence)
+- Experiment docs: [run-004 (dense)](https://github.com/sumitdotml/moe-emergence/blob/main/docs/experiments/run-004-dense-baseline.md), [run-005 (MoE)](https://github.com/sumitdotml/moe-emergence/blob/main/docs/experiments/run-005-moe-main.md), [run-006 (no-LB)](https://github.com/sumitdotml/moe-emergence/blob/main/docs/experiments/run-006-no-lb-ablation.md), [run-007 (top-2)](https://github.com/sumitdotml/moe-emergence/blob/main/docs/experiments/run-007-top2-directional.md)
 
 ---