---
license: apache-2.0
---
# MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts

**Paper or resources for more information:**
[[Paper]()] [[Code](https://github.com/SkyworkAI/MoE-plus-plus)]

## ⚡ Overview
We introduce three types of zero-computation experts: the zero expert, copy expert, and constant expert, which correspond to discard, skip, and replace operations, respectively. Moreover, we leverage gating residuals, enabling each token to consider the pathway taken in the previous layer when selecting the appropriate experts.

<div align=center>
<img src="figures/fig2.png" width="800px">
</div>

<div align=center>
<img src="figures/fig3.png" width="800px">
</div>
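
To make the three operations concrete, here is a minimal, illustrative PyTorch sketch of the zero-computation experts. The class names and the simplified constant-expert parameterization are our assumptions for illustration, not the exact implementation:

```python
import torch
import torch.nn as nn

class ZeroExpert(nn.Module):
    """'Discard' operation: the expert output is all zeros."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.zeros_like(x)

class CopyExpert(nn.Module):
    """'Skip' operation: the token passes through unchanged (identity)."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x

class ConstantExpert(nn.Module):
    """'Replace' operation: the output is a trainable constant vector,
    independent of the token content (simplified here; the full constant
    expert may additionally combine this vector with the input)."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.v = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, hidden_size) -> broadcast the constant vector
        return self.v.expand_as(x)
```

None of these experts runs an FFN forward pass, which is why tokens routed to them cost essentially nothing.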

### Download URL
<div align=center>

| | HuggingFace Model |
|:-------:|:------------------------------------------------------------------------------:|
| **MoE++7B-Base** | 🤗 [MoE++7B-Base](https://huggingface.co/Chat-UniVi/MoE-Plus-Plus-7B) |
| **MoE++7B-Chat** | 😊 [Coming Soon]() |
</div>


## 😮 Highlights
### 💡 Low Computing Overhead
The computational complexity of an MoE++ model is always **lower than** that of an MoE model with the same number of parameters.

<div align=center>
<img src="figures/fig4.png" width="800px">
</div>
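
As a rough, purely illustrative way to see why (the fraction `p` below is hypothetical, not a measurement from the paper): under top-k routing, every slot that lands on a zero-computation expert skips an FFN forward pass entirely.

```python
# Hypothetical back-of-the-envelope cost comparison (illustrative only).
ffn_flops = 1.0   # normalized cost of one FFN expert forward pass
top_k = 2         # experts activated per token

# Vanilla MoE: every routed slot runs an FFN expert.
moe_cost = top_k * ffn_flops                  # 2.0 FFN forwards per token

# MoE++: suppose a fraction p of routed slots land on zero-computation
# experts, which cost essentially nothing.
p = 0.25
moepp_cost = top_k * (1 - p) * ffn_flops      # 1.5 FFN forwards per token

print(f"MoE++ relative cost: {moepp_cost / moe_cost:.2f}x")  # -> 0.75x
```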

### 🔥 High Performance & High Throughput
Extensive experimental results demonstrate that MoE++ achieves better performance while delivering 1.1x to 2.1x the expert forward throughput of a vanilla MoE model of the same size, laying a solid foundation for developing advanced and efficient MoE-related models.

<div align=center>
<img src="figures/fig7.png" width="800px">
</div>

### 🤗 Deployment Friendly
Because zero-computation experts have negligible parameters, we can deploy all of them on every GPU, eliminating the significant communication overhead and expert load imbalance that arise when FFN experts are distributed across different GPUs.


## 🚀 Main Results
### Comparisons between MoE++ and Vanilla MoE Models

<div align=center>
<img src="figures/fig9.png" width="800px">
</div>

### Comparisons to LLMs of Equivalent Activated Parameters

<div align=center>
<img src="figures/fig10.png" width="800px">
</div>

## 😁 Why is MoE++ better than MoE?
### Flexible Computation Allocation
MoE++ allows simple tokens to utilize fewer FFN experts, freeing up more FFN experts to focus on challenging tokens. This results in both **Reduced Computation** and **Enhanced Performance**.

<div align=center>
<img src="figures/fig5.png" width="800px">
</div>

* Verbs tend to activate a large number of FFN experts. For example, the verb "touch" activates an average of 1.77 FFN experts across all layers, approaching the upper limit of 2. This likely occurs because verbs often convey rich semantic information and frequently interact with nouns to form more complex semantic structures.
* Nouns typically activate a moderate number of FFN experts, with most nouns averaging between 1.5 and 1.7 FFN expert activations.
* Simple tokens with little semantic content tend to activate a small number of FFN experts. For example, word fragments such as "pper" and "ather" usually activate fewer than 1.5 FFN experts.

These findings confirm that MoE++ allows simple tokens to utilize fewer FFN experts, freeing up more FFN experts to focus on challenging tokens.
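
For illustration, per-token statistics like the averages above can be derived from the router's top-k choices. The following hedged sketch assumes a particular expert-id convention and tensor layout, which are not taken from the paper:

```python
import torch

def avg_ffn_experts_per_token(topk_ids: torch.Tensor, num_ffn_experts: int) -> torch.Tensor:
    """Average number of FFN experts a token activates per layer.

    topk_ids: (num_layers, num_tokens, k) routed expert ids, under the
    assumed convention that ids < num_ffn_experts are FFN experts and
    the remaining ids are zero-computation experts.
    """
    is_ffn = topk_ids < num_ffn_experts       # True where a slot hit an FFN expert
    per_layer = is_ffn.sum(dim=-1).float()    # FFN experts used per token, per layer
    return per_layer.mean(dim=0)              # averaged over layers -> (num_tokens,)
```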

### Stable Routing
Gating residuals effectively establish connections between different MoE++ layers and reduce the variance of the routing scores, while leaving their mean and value range unchanged. Consequently, gating residuals contribute to stable routing across the heterogeneous expert architectures in MoE++.

<div align=center>
<img src="figures/fig6.png" width="800px">
</div>
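
In spirit, the gating residual adds the routing scores a token received in the previous layer to the scores predicted at the current layer before top-k selection. A minimal sketch, under our own assumptions about the gating projection and where the residual is applied:

```python
import torch

def route_with_gating_residual(x, w_gate, prev_scores, k=2):
    """One routing step with a gating residual (illustrative sketch).

    x:           (num_tokens, hidden) token representations
    w_gate:      (hidden, num_experts) this layer's gating weights
    prev_scores: (num_tokens, num_experts) routing scores from the
                 previous layer, or None for the first MoE++ layer
    """
    scores = x @ w_gate                 # this layer's raw routing scores
    if prev_scores is not None:
        scores = scores + prev_scores   # gating residual: reuse last layer's pathway
    probs = torch.softmax(scores, dim=-1)
    topk_probs, topk_ids = probs.topk(k, dim=-1)
    return topk_probs, topk_ids, scores  # scores become the next layer's residual
```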


## 🤖 API for Base Model Inference
If you want to load the model from the Hugging Face model hub or from a local path, you can use the following code snippets.

### Base Model Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

question = "Hello!"

# Load the MoE++ base model; trust_remote_code is required for the custom architecture.
model = AutoModelForCausalLM.from_pretrained("Chat-UniVi/MoE-Plus-Plus-7B", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Chat-UniVi/MoE-Plus-Plus-7B", trust_remote_code=True)

# Tokenize the prompt and generate a completion of up to 128 tokens.
inputs = tokenizer(question, return_tensors='pt').to(model.device)
response = model.generate(inputs.input_ids, max_length=128)
print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
```

### Chat Model Inference
Coming soon...

## 🏗️ Training & Validating
* The training code is built on [Skywork-MoE](https://github.com/SkyworkAI/Skywork-MoE). Until Skywork-MoE is open-sourced, we cannot open-source MoE++ on its own; we will release the training code after the approval is completed.
* The evaluation is performed on multiple key benchmarks using the [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness), as in the example below.

```bash
# For example, test MoE++ on winogrande.
# pretrained should point to the model repo (or a local path);
# trust_remote_code=True is needed for the custom MoE++ architecture.

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 accelerate launch \
--main_process_port 2004 -m lm_eval --model hf \
--model_args pretrained=Chat-UniVi/MoE-Plus-Plus-7B,trust_remote_code=True \
--tasks winogrande \
--batch_size 1 \
--output_path Results/winogrande
```

## 👍 Acknowledgement
* [Skywork-MoE](https://github.com/SkyworkAI/Skywork-MoE): the advanced MoE language model codebase that we built upon.


## 🤝 Related Projects
* [Chat-UniVi (CVPR 2024 Highlight)](https://github.com/PKU-YuanGroup/Chat-UniVi): an efficient large language and video assistant, a framework with remarkable interactive capabilities across images and videos.


## 🔒 License
* The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/Chat-UniVi/blob/main/LICENSE) file.
* The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violations.


## ✏️ Citation
If you find this paper useful, please consider starring 🌟 this repo and citing 📑 our paper:
```
@article{jin2024moe,
  title={MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts},
  author={Peng Jin and Bo Zhu and Li Yuan and Shuicheng Yan},
  year={2024}
}
```