Harder Tasks Need More Experts: Dynamic Routing in MoE Models
Paper • 2403.07652 • Published
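The paper's core idea is threshold-based dynamic routing: instead of a fixed top-k, each token activates experts in decreasing router-probability order until their cumulative probability exceeds a threshold, so harder (higher-entropy) tokens recruit more experts. The following is a minimal illustrative sketch of that mechanism; the function name, threshold value, and plain-Python style are assumptions for clarity, not the repository's implementation.

```python
import math

def dynamic_route(router_logits, p=0.9):
    """Pick experts in decreasing router probability until their
    cumulative mass reaches p (hypothetical helper illustrating
    the paper's threshold-based dynamic routing)."""
    # Softmax over router logits to get per-expert probabilities.
    m = max(router_logits)
    exps = [math.exp(x - m) for x in router_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Consider experts from most to least probable.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    chosen, cum = [], 0.0
    for i in order:
        chosen.append(i)
        cum += probs[i]
        if cum >= p:
            break
    return chosen

# A peaked router distribution ("easy" token) activates few experts;
# a flat one ("hard" token) activates many.
easy = dynamic_route([4.0, 0.0, 0.0, 0.0], p=0.9)
hard = dynamic_route([0.0, 0.0, 0.0, 0.0], p=0.9)
```

With a dominant expert, a single selection already exceeds the threshold; with a uniform distribution, all four experts are needed, matching the paper's intuition that harder inputs use more experts.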
Model weights for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models".

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AnLan577/Dynamic_MoE")
model = AutoModelForCausalLM.from_pretrained("AnLan577/Dynamic_MoE")
```
Inference code is available at: https://github.com/ZhenweiAn/Dynamic_MoE
```bibtex
@article{huang2024harder,
  title={Harder Tasks Need More Experts: Dynamic Routing in MoE Models},
  author={Huang, Quzhe and An, Zhenwei and Zhuang, Nan and Tao, Mingxu and Zhang, Chen and Jin, Yang and Xu, Kun and Chen, Liwei and Huang, Songfang and Feng, Yansong},
  journal={arXiv preprint arXiv:2403.07652},
  year={2024}
}
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="AnLan577/Dynamic_MoE")
```