Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers
Paper • arXiv:2303.01610
Turns out, you can slice up the individual MLP layers of a dense language model into even splits of experts.
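As a rough illustration (not the exact conversion script used here), a Llama-style gated MLP can be split evenly along its intermediate dimension, with each slice becoming one expert. The function and tensor names below are assumptions for the sketch:

```python
import torch

def slice_mlp_into_experts(gate_proj, up_proj, down_proj, num_experts=8):
    """Split one dense, Llama-style MLP into `num_experts` equal slices
    along the intermediate (d_ff) dimension.

    gate_proj, up_proj : weights of shape (d_ff, d_model)
    down_proj          : weight of shape (d_model, d_ff)
    Returns a list of (gate, up, down) weight triples, one per expert.
    """
    d_ff = gate_proj.shape[0]
    assert d_ff % num_experts == 0, "d_ff must split evenly across experts"
    chunk = d_ff // num_experts

    experts = []
    for i in range(num_experts):
        rows = slice(i * chunk, (i + 1) * chunk)
        experts.append((
            gate_proj[rows, :].clone(),   # (d_ff / E, d_model)
            up_proj[rows, :].clone(),     # (d_ff / E, d_model)
            down_proj[:, rows].clone(),   # (d_model, d_ff / E)
        ))
    return experts
```

Because the activation and elementwise gate act per hidden unit, summing the outputs of all slices reproduces the dense MLP (up to whatever weighting the router applies), which is presumably why the all-experts setting stays coherent.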
What I did here: split each dense MLP into 8 even expert slices.
As a result, the model behaves completely coherently when all 8 experts are activated (i.e., experts_per_tok is set to 8).
With 4 experts activated, it's... far less coherent.
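A quick way to compare the two settings, as a sketch — the model path is hypothetical, and `num_experts_per_tok` is the Mixtral-style config name for what I'm calling experts_per_tok here:

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/sliced-moe-model"   # hypothetical path to the converted model
tok = AutoTokenizer.from_pretrained(model_id)
inputs = tok("The meaning of life is", return_tensors="pt")

# In the stock Mixtral implementation the routing top-k is read from the
# config at load time, so reload with a modified config for each setting.
for k in (8, 4):
    cfg = AutoConfig.from_pretrained(model_id)
    cfg.num_experts_per_tok = k
    model = AutoModelForCausalLM.from_pretrained(model_id, config=cfg)
    out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    print(f"{k} experts/token:", tok.decode(out[0], skip_special_tokens=True))
```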
I am interested in continuing to train this in a way that lets it naturally handle variable expert counts and learn to balance features across the experts. If that works, we can potentially teach it to use less computation for tokens that are trivial to predict, and more when necessary.
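One possible way to approach that, sketched under the assumption of Mixtral-style MoE blocks that expose a `top_k` attribute (an illustration, not a worked-out recipe): randomly vary the active expert count during training so the router has to stay usable at every budget.

```python
import random

def variable_expert_step(model, batch, optimizer, expert_counts=(2, 4, 6, 8)):
    """One training step with a randomly sampled number of active experts.

    `batch` is expected to contain input_ids, attention_mask and labels.
    """
    k = random.choice(expert_counts)
    for module in model.modules():
        if hasattr(module, "top_k"):   # sparse MoE routing blocks
            module.top_k = k
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return k, loss.item()
```

On top of something like this, the usual load-balancing auxiliary loss (as in Switch/Mixtral) would probably still be wanted so the expert slices stay balanced rather than collapsing onto a few.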