Add library_name and link to the GitHub repository

#2
by nielsr - opened
Files changed (1)
  1. README.md +3 -0
README.md CHANGED
@@ -1,12 +1,15 @@
 ---
 license: apache-2.0
 pipeline_tag: text-generation
+library_name: transformers
 ---

 # 🧨 FLAME-MoE

 This repository contains the model used in the paper [FLAME-MoE: A Transparent End-to-End Research Platform for Mixture-of-Experts Language Models](https://huggingface.co/papers/2505.20225).

+Code: https://github.com/cmu-flame/FLAME-MoE
+
 **FLAME-MoE** is a fully open Mixture-of-Experts (MoE) language model suite developed by Carnegie Mellon University. It provides a transparent and reproducible research platform for investigating expert routing, model scaling, and training dynamics in sparse architectures. The suite includes seven decoder-only transformer models ranging from 38M to 1.7B active parameters and reflects production-grade MoE setups with 64 experts per MoE layer, top-8 routing, and shared experts.

 ---
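
For context, setting `library_name: transformers` tells the Hub to associate the repo with the Transformers library and surface its loading snippet. A minimal sketch of what that looks like for this model suite, assuming the checkpoint loads through the standard auto classes; the repo id below is a placeholder, not taken from this PR:

```python
# Minimal sketch of loading a FLAME-MoE checkpoint via the auto classes
# that `library_name: transformers` advertises on the Hub.
# NOTE: "cmu-flame/FLAME-MoE-1.7B" is a hypothetical repo id used for
# illustration; substitute the actual model id of this repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cmu-flame/FLAME-MoE-1.7B"  # placeholder id (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short text-generation sample, matching the repo's pipeline_tag.
inputs = tokenizer("Mixture-of-Experts models route tokens to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```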