---
license: apache-2.0
---

# 🧨 FLAME-MoE

**FLAME-MoE** is a fully open Mixture-of-Experts (MoE) language model suite developed by Carnegie Mellon University. It provides a transparent and reproducible research platform for investigating expert routing, model scaling, and training dynamics in sparse architectures. The suite includes seven decoder-only transformer models ranging from 38M to 1.7B active parameters and reflects production-grade MoE setups with 64 experts per MoE layer, top-8 routing, and shared experts.
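
To make the routing setup concrete, here is a minimal sketch of a top-k MoE feed-forward layer with shared experts in plain PyTorch. It illustrates the general mechanism only and is not the Megatron-LM implementation used for FLAME-MoE; the class name, hidden sizes, and the softmax-over-selected-experts gating are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Sketch of an MoE FFN: top-k routed experts plus always-active shared experts."""

    def __init__(self, d_model=512, d_ff=1024, n_experts=64, top_k=8, n_shared=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)  # per-token expert scores

        def make_expert():
            return nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))

        self.experts = nn.ModuleList([make_expert() for _ in range(n_experts)])
        self.shared_experts = nn.ModuleList([make_expert() for _ in range(n_shared)])

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = self.router(x)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        gates = F.softmax(top_scores, dim=-1)                    # normalize over the k selected experts
        out = sum(expert(x) for expert in self.shared_experts)   # shared experts see every token
        for slot in range(self.top_k):                           # dispatch tokens to their routed experts
            for expert_id, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == expert_id
                if mask.any():
                    out[mask] += gates[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Example: run 16 token embeddings through the layer (shape is preserved).
layer = ToyMoELayer()
print(layer(torch.randn(16, 512)).shape)  # torch.Size([16, 512])
```

The released models are trained with Megatron-LM expert parallelism (EP=8); this sketch omits all parallelism and efficiency concerns.
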

---

## 🔍 Model Summary

| Model Name           | Active / Total Params | Layers | MoE Experts (Total / Active / Shared) | Training FLOPs | Tokens Trained |
| -------------------- | --------------------- | ------ | ------------------------------------- | -------------- | -------------- |
| FLAME-MoE-38M-100M   | 38M / 100M            | 9      | 64 / 8 / 2                            | 1.0e18         | 4.4B           |
| FLAME-MoE-98M-349M   | 98M / 349M            | 9      | 64 / 8 / 2                            | 3.0e18         | 5.0B           |
| FLAME-MoE-115M-459M  | 115M / 459M           | 12     | 64 / 8 / 2                            | 6.0e18         | 8.7B           |
| FLAME-MoE-290M-1.3B  | 290M / 1.3B           | 9      | 64 / 8 / 2                            | 2.0e19         | 11.4B          |
| FLAME-MoE-419M-2.2B  | 419M / 2.2B           | 15     | 64 / 8 / 2                            | 3.0e19         | 11.9B          |
| FLAME-MoE-721M-3.8B  | 721M / 3.8B           | 12     | 64 / 8 / 2                            | 8.0e19         | 18.4B          |
| FLAME-MoE-1.7B-10.3B | 1.7B / 10.3B          | 18     | 64 / 8 / 2                            | 2.4e20         | 23.1B          |

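
As a rough consistency check (not part of the official documentation), the Training FLOPs column agrees with the common C ≈ 6·N·D estimate of training compute, where N is the number of active parameters and D is the number of tokens trained. The snippet below reproduces the column from the other two; the 6ND rule is a standard approximation, not the exact accounting used by the authors.

```python
# Illustrative 6ND check: training FLOPs ~ 6 * active_params * tokens_trained.
models = {
    "FLAME-MoE-38M-100M":   (38e6,  4.4e9),
    "FLAME-MoE-98M-349M":   (98e6,  5.0e9),
    "FLAME-MoE-115M-459M":  (115e6, 8.7e9),
    "FLAME-MoE-290M-1.3B":  (290e6, 11.4e9),
    "FLAME-MoE-419M-2.2B":  (419e6, 11.9e9),
    "FLAME-MoE-721M-3.8B":  (721e6, 18.4e9),
    "FLAME-MoE-1.7B-10.3B": (1.7e9, 23.1e9),
}
for name, (active_params, tokens) in models.items():
    print(f"{name}: ~{6 * active_params * tokens:.1e} FLOPs")
# Each estimate lands within a few percent of the table's Training FLOPs column
# (e.g. ~1.0e18 for the 38M model and ~2.4e20 for the 1.7B model).
```
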

---

## 📖 Training Details

* **Framework**: Megatron-LM with Expert Parallelism (EP=8) and Pipeline Parallelism (PP=1)
* **Data**: Pretrained on DataComp-LM (DCLM)
* **Batch Size**: 1024
* **Sequence Length**: 2048
* **Optimizer**: Adam
* **Scheduler**: WSD (Warmup-Stable-Decay); an illustrative schedule sketch follows this list
* **Learning Rate**: Max 3e-4, Min 3e-5
* **Checkpoints**: 10 saved per model across training
* **Hardware**: 32× NVIDIA H100 GPUs
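
Since the card only lists the schedule type and the learning-rate range, the function below sketches what a Warmup-Stable-Decay schedule looks like with those values. The warmup and decay fractions and the linear decay shape are illustrative assumptions, not the actual FLAME-MoE training configuration.

```python
def wsd_lr(step, total_steps, max_lr=3e-4, min_lr=3e-5,
           warmup_frac=0.01, decay_frac=0.1):
    """Illustrative Warmup-Stable-Decay learning-rate schedule (fractions assumed)."""
    warmup_steps = int(total_steps * warmup_frac)
    decay_start = int(total_steps * (1.0 - decay_frac))
    if step < warmup_steps:                 # linear warmup from 0 to max_lr
        return max_lr * step / max(warmup_steps, 1)
    if step < decay_start:                  # stable phase at max_lr
        return max_lr
    # linear decay from max_lr to min_lr over the final decay_frac of training
    progress = (step - decay_start) / max(total_steps - decay_start, 1)
    return max_lr - (max_lr - min_lr) * progress

# Example: learning rate at a few points in a 10,000-step run.
for step in (0, 50, 5_000, 9_500, 10_000):
    print(step, f"{wsd_lr(step, 10_000):.2e}")
```
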

---

## 🛠 Intended Use

FLAME-MoE is developed for **research purposes only**. It supports academic study of:

* Sparse model training dynamics
* Expert routing behavior and specialization
* Scaling laws and compute-optimal design
* Benchmarking and reproducibility in MoE LLMs

It is not intended for commercial deployment or for instruction-tuned downstream applications.

---

## 📂 Access

All models, training scripts, logs, routing traces, and evaluation pipelines are available at:

🔗 [https://github.com/cmu-flame/FLAME-MoE](https://github.com/cmu-flame/FLAME-MoE)