Add library name, link to code
#1
by nielsr HF Staff - opened

README.md CHANGED
@@ -1,6 +1,7 @@
 ---
 license: apache-2.0
 pipeline_tag: text-generation
+library_name: transformers
 ---

 # 🧨 FLAME-MoE
@@ -9,6 +10,8 @@ This repository contains the model described in [FLAME-MoE: A Transparent End-to

 **FLAME-MoE** is a fully open Mixture-of-Experts (MoE) language model suite developed by Carnegie Mellon University. It provides a transparent and reproducible research platform for investigating expert routing, model scaling, and training dynamics in sparse architectures. The suite includes seven decoder-only transformer models ranging from 38M to 1.7B active parameters and reflects production-grade MoE setups with 64 experts per MoE layer, top-8 routing, and shared experts.

+All code, training logs, and model checkpoints are available at: [https://github.com/cmu-flame/FLAME-MoE](https://github.com/cmu-flame/FLAME-MoE)
+
 ---

 ## 🔍 Model Summary
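Because this change declares `library_name: transformers`, the checkpoint should become loadable through the standard `transformers` auto classes. The sketch below is illustrative only: the repository id is a placeholder (replace it with the actual Hub id of this model), and `trust_remote_code=True` is an assumption in case the MoE architecture ships custom modeling code.

```python
# Minimal loading sketch, assuming the repo works with the transformers
# auto classes as implied by `library_name: transformers`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cmu-flame/FLAME-MoE-1.7B"  # placeholder Hub id, not confirmed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # assumption: custom MoE modeling code may be required
)

inputs = tokenizer("Mixture-of-Experts models scale by", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```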
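The README summary mentions 64 experts per MoE layer, top-8 routing, and shared experts. As a generic illustration of that routing pattern, and not the FLAME-MoE implementation, a minimal PyTorch sketch could look like this:

```python
# Illustrative top-k MoE routing sketch (generic, not the FLAME-MoE code).
# Each token is scored against all experts, its top-k experts process it,
# and shared experts are applied to every token unconditionally.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKRouterSketch(nn.Module):
    def __init__(self, hidden, n_experts=64, top_k=8, n_shared=1):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(hidden, n_experts, bias=False)  # router
        self.experts = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(n_experts)])
        self.shared = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(n_shared)])

    def forward(self, x):  # x: [tokens, hidden]
        scores = F.softmax(self.gate(x), dim=-1)         # routing probabilities
        weights, idx = scores.topk(self.top_k, dim=-1)   # top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                   # naive dispatch loop
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        for expert in self.shared:                       # shared experts see every token
            out += expert(x)
        return out
```

Real MoE layers replace the per-expert loop with grouped dispatch for efficiency, but the routing logic, softmax gate, top-k selection, weighted combination, plus always-on shared experts, follows the same pattern.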