mishig (HF Staff) committed
Commit 3909aa5 · verified · 1 Parent(s): 3c166b4

Add 1 files

Files changed (1): 2410/2410.05357.md (added, +3458 -0)
 
Title: Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild

URL Source: https://arxiv.org/html/2410.05357

Markdown Content:
License: arXiv.org perpetual non-exclusive license
arXiv:2410.05357v2 [cs.LG] 05 Dec 2024

Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild

Xinyu Zhao∗1, Guoheng Sun∗2, Ruisi Cai∗3, Yukun Zhou∗4, Pingzhi Li∗1, Peihao Wang∗3
Bowen Tan5, Yexiao He2, Li Chen6, Yi Liang6, Beidi Chen5, Binhang Yuan4
Hongyi Wang†7, Ang Li†2, Zhangyang Wang†3, Tianlong Chen†1
1UNC CH  2UMD  3UT Austin  4HKUST  5CMU  6Google  7Rutgers University
{xinyu,pingzhi,tianlong}@cs.unc.edu, {ghsun,yexiaohe,angliece}@umd.edu
{ruisi.cai,peihaowang,atlaswang}@utexas.edu
yzhoufw@connect.ust.hk, {btan2,beidic}@andrew.cmu.edu
li.lizliz.chen@gmail.com, yiliang@google.com
biyuan@ust.hk, hongyi.wang.001@rutgers.edu

∗Equal Contribution  †Equal Supervision
Abstract

As Large Language Models (LLMs) excel across tasks and specialized domains, scaling LLMs based on existing models has gained significant attention, a process challenged by potential performance drops when combining disparate models. Various techniques have been proposed to aggregate pre-trained LLMs, including model merging, Mixture-of-Experts, and stacking. Despite their merits, a comprehensive comparison and synergistic application of these techniques to a diverse model zoo has yet to be adequately addressed. In light of this research gap, this paper introduces Model-GLUE, a holistic LLM scaling guideline. First, our work starts with a benchmarking of existing LLM scaling techniques, especially selective merging and variants of mixture. Utilizing the insights from the benchmark results, we formulate a strategy for the selection and aggregation of a heterogeneous model zoo characterized by different architectures and initializations. Our methodology involves clustering mergeable models, selecting a merging strategy, and integrating model clusters through model-level mixture. Finally, as evidenced by our experiments on a diverse Llama-2-based model zoo, Model-GLUE shows an average performance enhancement of 5.61%, achieved without additional training. Codes are available at https://github.com/Model-GLUE/Model-GLUE.
1 Introduction

Large Language Models (LLMs) have demonstrated unparalleled capability in a diverse array of natural language tasks, encompassing commonsense reasoning, question answering, and specialized domains such as mathematics and programming OpenAI (2023); Rozière et al. (2023); Touvron et al. (2023). The effectiveness of LLMs rests on the scaling law, which posits that proportionally increasing model and training data size leads to enhanced model performance Kaplan et al. (2020). Nevertheless, the computation overhead and data requirements surge as LLMs continue to scale. With the proliferation of open-source general and specialized LLMs, aggregating existing models to construct a more versatile LLM emerges as an economical alternative to training a larger LLM from scratch Ding et al. (2024); Goddard et al. (2024); Wan et al. (2024). This not only mitigates the computation cost but also leverages the collective advancements of previous efforts in building LLMs.

Among the different methods for combining existing LLMs, a major class is merging Ainsworth et al. (2022); Akiba et al. (2024); Ilharco et al. (2023); Jang et al. (2024); Matena and Raffel (2022); Wortsman et al. (2022); Yadav et al. (2023); Yu et al. (2024). Model merging combines multiple models into a single one of the same size through weight-space transformation. Wortsman et al. (2022) first propose merging a few fine-tuned models as a training trick exploiting the flat loss landscape, and Ilharco et al. (2023) extend it to the multi-task scenario; both employ simple averaging. Other works propose more sophisticated merging methods, leveraging weight sparsity Yadav et al. (2023); Yu et al. (2024) and non-uniform coefficients Akiba et al. (2024); Matena and Raffel (2022). However, they assume that all candidate models are "useful" when merging. While this may hold for small, curated model collections, it may not be the case in real-world scenarios given a large and divergent model zoo. How to ensure the benefits of merging across different model zoo sizes and similarity levels, and how to exclude "harmful" candidates, remain underexplored.

Since merging is limited to models with the same structure and initial weights, another alternative is Mixture-of-Experts (MoE) Goddard et al. (2024). MoE is a conditional computation architecture that activates only a subset of model parameters for each specific input example Shazeer et al. (2017). MoE LLMs have already demonstrated performance and computational efficiency advantages over their dense counterparts Fedus et al. (2022); Jiang et al. (2024); Lepikhin et al. (2020); Zoph et al. (2022). In particular, we use the broader term "mixture" to denote the aggregation of existing expert LLMs following the MoE paradigm, which has been successfully implemented in some recent practices Sukhbaatar et al. (2024); Wan et al. (2024); Wang et al. (2023a). However, these implementations neglect the inherent flexibility of MoE to integrate different expert models, especially groups that do not work with merging. Also, the difference and possible synergy between merging and mixture have not been thoroughly investigated. Based on the above challenges, our primary research question is formulated as:

(Q) Is it feasible to establish a benchmark for selecting and aggregating Large Language Models (LLMs) from an extensive and varied model zoo based on current state-of-the-art model merging and mixture, thereby enhancing the overall competence of the final model?
Figure 1: Overview of Model-GLUE, comprising (1) Model Clustering based on architecture and weight similarity; (2) Model Filtering and Searching for merging; (3) Model Merging within each cluster; (4) Model-Level Mixture of merged models.

To address (Q), we present Model-GLUE, a comprehensive benchmark and set of guidelines for LLM scaling. Model-GLUE is the first work on LLM scaling to encompass a wide range of model group sizes and variability, with a principal emphasis on the merging and mixture methodologies, and also a discussion of model stacking. We first delve into merging scheduling, analyzing strategies for identifying potentially detrimental model candidates and various merging techniques. We then explore a variety of model mixtures as an alternative to merging, covering different mixture granularities, router architectures, router inputs, etc. Building upon the insights from model merging and mixture, Model-GLUE introduces an efficient and robust LLM scaling recipe for a diverse set of models. It starts with model clustering and progressive merging, followed by a mixture of all clusters, thereby integrating similar knowledge from the model zoo while highlighting the respective strengths of each cluster. Our contributions are outlined as follows:

- We conduct a comprehensive benchmarking analysis of LLM merging strategies, beginning with identifying each model's contribution and then filtering out detrimental candidates. Our findings are validated on a range of LLMs, from a few to over a dozen.

- We assess model mixture along four distinct dimensions: mixture level, router design, router input, and hybrid mixture. We derive several principles for model mixture and discuss its utility as a solution for scaling models incompatible with merging.

- We introduce Model-GLUE, a recipe for progressively combining LLMs, based on findings from the merging and mixture benchmarks. It first conducts selective merging and then model mixture, outperforming the best single model on general reasoning, mathematics, and coding tasks.

- Extensive experimental results on Llama-2-based models validate our proposal. For instance, Model-GLUE achieves an average increase of 5.61% across chatting, mathematics, and coding benchmarks compared to the best single LLM.
2 Related Works

Model Merging.

Merging methods can be divided into zero-shot merging and merge-then-train approaches. Early zero-shot merging methods are weight averaging and Linear Mode Connectivity Nagarajan and Kolter (2021); Wortsman et al. (2022). Later popular methods include Task Arithmetic Ilharco et al. (2023), which manipulates task vectors, and TIES Yadav et al. (2023), which addresses parameter interference through trimming and conflict resolution. DARE Yu et al. (2024) selectively drops and rescales parameters to enhance merging without extra training. Others focus on geometric properties of weights for merging Shoemake (1985); Jang et al. (2024). The recent Evolutionary Model Merge Akiba et al. (2024) optimizes both weight configurations and data token pathways during inference. For the merge-then-train approach, Fisher merging Matena and Raffel (2022) uses the Fisher information matrix to weigh model parameters so as to maximize their joint likelihood. RegMean Jin et al. (2023) adapts linear merging to each linear layer while averaging embeddings and biases. However, both zero-shot and merge-then-train approaches are less effective for models initialized differently. Ainsworth et al. (2022); Imfeld et al. (2023); Verma and Elbayad (2024); Xu et al. (2024) exploit the permutation symmetry inherent in neural networks on small to large models. To boost merging efficiency, our focus lies in the zero-shot merging of models with the same architecture and initialization.

Model Mixture.

Mixture-of-Experts (MoE) Shazeer et al. (2017) scales up neural networks by utilizing router networks to activate different parts of the model for different input tokens. Its integration with Large Language Models (LLMs) has gained notable recognition for its exceptional generative capabilities and unparalleled efficiency. Recently, Mixtral Jiang et al. (2024) demonstrated that the MoE methodology can match the performance of dense LLM counterparts while employing significantly fewer active parameters. Model mixture combines a collection of dense LLMs, irrespective of their sizes, into an MoE model. Some studies explore model fusion Wan et al. (2024); Wang et al. (2023a), integrating the outputs of expert models to exploit their unique insights into the data distribution. Recent initiatives include Branch-Train-MiX Sukhbaatar et al. (2024), which starts with a seed dense LLM and then branches out, facilitating the parallel training of expert models. These trained dense models are subsequently incorporated as experts within MoE layers, with the other parameters being averaged. However, this approach is limited to dense models that share identical architectures and sizes. Most recently, UltraFuser Ding et al. (2024) introduces a token-level soft gating mechanism that blends model outputs, with a two-stage training strategy.

Model Stacking.

Model stacking concatenates two models along the depth dimension. In the era of LLMs, Wu et al. (2024) reuse pre-trained LLaMA layers and reset the output projection to zero in stacking. Kim et al. (2023) show that dropping middle layers in stacking yields superior performance. Wang et al. (2023c) show that stacking can help recover model-parameter scaling laws under insufficient data. Reddi et al. (2023) demonstrate that gradual stacking leads to significant improvements in wall-clock time when training few-shot learners. Theoretically, Agarwal et al. (2024) prove that model stacking can be interpreted as Nesterov acceleration in network optimization. However, all the aforementioned stacking methods involve no more than two kinds of models and primarily focus on the benefits of training acceleration. In this work, we explore the possibility of stacking two heterogeneous models to combine their capabilities.

Model Scaling Tools.

There have been several tools for model mixture and merging, and for scaling models using existing LLMs. For example, Mergekit is an open-source library designed to facilitate the application of model merging strategies and the construction of MoE Goddard et al. (2024). As a representative unified LLM, Beyonder is a set of mixtures of merged and single LLMs for different tasks¹. However, there is still a lack of a comprehensive benchmark of the various mixing and merging techniques, and of practical guidance on how to unify groups of LLMs at different levels of similarity.
3 Methodology

3.1 Preliminaries

In this study, we consider a collection of $n$ existing Large Language Models (LLMs), denoted as $\{M_1, \dots, M_n\}$, which have been fine-tuned on diverse corpora. Our objective is to outline a systematic approach towards producing one stronger aggregated model across all knowledge domains. Specifically, the unified LLM incorporates single LLMs mainly through merging and mixture.

3.2 Model Merging

Figure 2: Pipeline for model merging, as well as an overview of merging methods and search strategies.
The concept of Model Merging.

Model merging integrates multiple models into one unified model in the weight space, and is compatible with LLMs sharing the same initialization Goddard et al. (2024). Popular merging methods can be divided into two types: ❶ merging entire model weights, represented by Model Soup (Linear) Wortsman et al. (2022), SLERP Shoemake (1985), and Model Stock Jang et al. (2024); ❷ task-vector-based merging, represented by Task Arithmetic Ilharco et al. (2023), TIES Yadav et al. (2023), and DARE Yu et al. (2024). The former directly interpolates model weights, while the latter subtracts the pre-trained model from each fine-tuned model to obtain task vectors and utilizes the sparsity and consistency of parameters for refined merging. The basic linear interpolation merging is defined as $w_u = \sum_{i=1}^{n} s_i w_i$, where $w_i$ and $s_i$ are the model weights and merging coefficient of $M_i \in \{M_1, \dots, M_n\}$.
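In code, the linear interpolation above amounts to a coefficient-weighted sum over aligned parameters. Below is a minimal sketch in plain Python, where a "model" is just a dict of parameter lists (a stand-in for real tensor state dicts) and the coefficients are assumed to sum to 1:

```python
def linear_merge(state_dicts, coeffs):
    """Linear interpolation merging: w_u = sum_i s_i * w_i.

    state_dicts: list of {param_name: [float, ...]} with identical keys/shapes.
    coeffs: one merging coefficient s_i per model (assumed to sum to 1).
    """
    assert len(state_dicts) == len(coeffs)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(s * sd[name][j] for s, sd in zip(coeffs, state_dicts))
            for j in range(len(state_dicts[0][name]))
        ]
    return merged

# Two toy "models" merged with uniform coefficients (model-soup style):
m1 = {"w": [1.0, 2.0], "b": [0.0]}
m2 = {"w": [3.0, 4.0], "b": [1.0]}
merged = linear_merge([m1, m2], [0.5, 0.5])
print(merged)  # {'w': [2.0, 3.0], 'b': [0.5]}
```

Task-vector methods differ only in what is summed: each model first has the shared pre-trained weights subtracted, and the weighted sum of these deltas is added back to the base.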
Selective Merging Pipeline.

Merging can be easily applied to models with the same architecture, but it does not guarantee better results. Therefore, before searching for merging coefficients, we first pre-process the models by clustering them using cosine similarity, and then search for the optimal merging coefficients and method within each cluster. Details are explained in Appendix A.5.
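The clustering step can be sketched as follows; the flattened weight vectors, the 0.9 threshold, and the greedy single-link grouping are illustrative assumptions rather than the exact procedure of Appendix A.5:

```python
import math

def cosine(u, v):
    """Cosine similarity between two flattened weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster_models(weights, threshold=0.9):
    """Greedy clustering: a model joins the first cluster whose
    representative (first member) is cosine-similar enough,
    otherwise it starts a new cluster."""
    clusters = []  # each cluster is a list of model indices
    for i, w in enumerate(weights):
        for cluster in clusters:
            if cosine(weights[cluster[0]], w) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Models 0 and 1 are near-identical fine-tunes; model 2 points elsewhere.
weights = [[1.0, 0.0, 0.1], [0.98, 0.05, 0.12], [0.0, 1.0, 0.0]]
print(cluster_models(weights))  # [[0, 1], [2]]
```

Merging coefficients are then searched only within each cluster, so dissimilar models never get averaged together.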
Heuristic and Evolutionary Strategies.

The heuristic strategy searches for and filters out potentially harmful models for merging. It is based on greedy search and involves three variants: ❶ Heuristic-Average retains a candidate only if it improves performance on the proxy dataset in each round of merging. ❷ Heuristic-Coefficient builds upon Heuristic-Average by combining the previously merged model with a new candidate using different coefficients in each round. ❸ Heuristic-Similarity selects the candidate model with the highest or lowest similarity and conducts a coefficient search to combine it with the previously merged model. Detailed heuristic strategy algorithms can be found in Appendix A.1. Heuristic strategies perform pairwise merging of models, while many methods allow merging multiple models at once. Therefore, we also consider jointly optimizing all model coefficients using the Evolutionary Strategy.
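The Heuristic-Average variant reduces to a greedy loop over candidates. The sketch below uses toy weight vectors, a uniform pairwise merge, and a hypothetical `proxy_score` function standing in for evaluation on the proxy dataset:

```python
def heuristic_average(models, proxy_score):
    """Greedily merge candidates, keeping each one only if the
    pairwise average improves the proxy-dataset score."""
    order = sorted(models, key=proxy_score, reverse=True)  # best model first
    merged, best = order[0], proxy_score(order[0])
    for cand in order[1:]:
        trial = [(a + b) / 2 for a, b in zip(merged, cand)]  # uniform pairwise merge
        if proxy_score(trial) > best:
            merged, best = trial, proxy_score(trial)  # keep this candidate
        # otherwise the candidate is filtered out as harmful
    return merged, best

# Toy setup: a "model" is a weight vector, and the proxy score
# rewards weights close to an (unknown) optimum at [1, 1].
score = lambda w: -((w[0] - 1) ** 2 + (w[1] - 1) ** 2)
models = [[0.8, 1.2], [1.2, 0.8], [3.0, 3.0]]
merged, best = heuristic_average(models, score)
print(merged, best)  # the outlier [3.0, 3.0] is rejected
```

Heuristic-Coefficient would replace the fixed 1/2 weighting with a per-round coefficient search, and the evolutionary strategy would instead optimize all coefficients jointly.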
3.3 Model Mixture

Figure 3: The overview and decision flow of three model mixture levels and their selection philosophy.

The concept of Model Mixture.

Model mixture resembles Mixture-of-Experts (MoE): it scales an LLM with multiple pre-trained LLM experts and extends beyond the traditional token-dependent Feed-Forward-Network (FFN) MoE design Shazeer et al. (2017). A mixture model is composed of MoE modules and the remaining shared parameters. An MoE module consists of a router $\mathcal{G}(\cdot)$ and $n$ expert networks $\{E_1, \dots, E_n\}$. $\mathcal{G}(\cdot)$ takes a router input $x_{\mathcal{G}}$ and generates an expert assignment for each token input $x$. The MoE module then outputs a weighted sum of the experts' outputs as $\mathrm{MoE}(x, x_{\mathcal{G}}) = \sum_{i=1}^{n} \mathcal{G}(x_{\mathcal{G}})_i E_i(x)$. We experiment with several variations of model mixture, classified as follows:
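The weighted sum above can be sketched directly. Here experts are plain functions of the input, and a dense softmax router (all experts active) is used as an illustrative simplification of sparse Top-K routing:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def moe_output(x, router_logits, experts):
    """MoE(x, x_G) = sum_i G(x_G)_i * E_i(x): gate weights times expert outputs."""
    gates = softmax(router_logits)
    return sum(g * expert(x) for g, expert in zip(gates, experts))

experts = [lambda x: 2 * x, lambda x: x + 10]
# Equal router logits -> uniform gates of 0.5 each: 0.5*(2x) + 0.5*(x + 10)
print(moe_output(4.0, [0.0, 0.0], experts))  # 11.0
```

In a real mixture, `router_logits` would come from $\mathcal{G}(x_{\mathcal{G}})$ and each expert would be an FFN, a Transformer block, or a whole LLM, depending on the mixture level.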
Mixture levels.

Traditional Mixture-of-Experts models replace the dense FFN layer at each Transformer block with an MoE module, which is only compatible with LLMs that share the same architecture. Besides this ❶ FFN-level mixture, we also experiment with two coarse-grained mixtures. ❷ Block-level mixture creates an MoE module by aggregating the Transformer blocks with the same index from each LLM as experts and adds a block-wise router. Block-level mixture is applicable to models with different architectures but the same embedding space, number of layers, and intermediate dimension. ❸ Model-level mixture takes each LLM as an expert and uses a router at the mixture model's input. Model-level mixture covers any LLM groups not compatible with FFN- and block-level mixture. In particular, the model-level mixture is similar but not identical to a model ensemble: the former can be sparse, focuses more on efficiency, and exploits single-LLM expertise, while the latter produces general results by averaging or majority voting over all model outputs. Details can be found in Appendix A.3.
Router design.

The router network in many MoE studies adheres to a ❶ linear router Shazeer et al. (2017). We experiment with a more complex ❷ MLP router to examine whether this router design leads to better performance. It is implemented as two sequential feed-forward layers with a ReLU function in between, inspired by Shen et al. (2023); Liang et al. (2022). For the routing method, we employ Top-K selection for all routers, which activates the K experts corresponding to the K largest softmaxed router outputs Shazeer et al. (2017); Shen et al. (2023).
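Top-K selection can be sketched as keeping the K largest softmaxed router outputs and zeroing the rest; renormalizing the surviving gates so they sum to 1 is a common convention and an assumption here:

```python
import math

def top_k_gates(router_logits, k):
    """Softmax the router logits, keep the K largest outputs,
    zero the others, and renormalize the active gates."""
    m = max(router_logits)
    exps = [math.exp(l - m) for l in router_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    keep = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    kept_mass = sum(probs[i] for i in keep)
    return [probs[i] / kept_mass if i in keep else 0.0 for i in range(len(probs))]

gates = top_k_gates([2.0, 1.0, 0.5, -1.0], k=2)
print(gates)  # experts 0 and 1 active, experts 2 and 3 gated to zero
```

Only the experts with nonzero gates are evaluated, which is what makes the mixture sparse and cheaper than a full ensemble.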
Router input.

We adopt two types of router input for the different levels of model mixture: ❶ Token input for FFN-level mixture, where the router input is the same as the model input; ❷ Sample input for block- and model-level mixture, where we calculate the average embedding over a sample's $n$ tokens as the sample input $x_{\mathcal{G}} = \frac{1}{n} \sum_{i=1}^{n} x_i$, and route all tokens of a sample to the same expert. Sample routing avoids inconsistency in the attention operation.
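Sample routing can be sketched as: average the token embeddings into the sample input, query the router once, and send every token of the sample to the same expert. The toy linear router and top-1 selection below are illustrative assumptions:

```python
def sample_route(token_embeddings, router):
    """Route all tokens of one sample to the same expert:
    the router only sees the mean token embedding x_G."""
    n = len(token_embeddings)
    dim = len(token_embeddings[0])
    x_g = [sum(tok[d] for tok in token_embeddings) / n for d in range(dim)]
    logits = router(x_g)
    expert_id = max(range(len(logits)), key=lambda i: logits[i])  # top-1 choice
    return x_g, expert_id

# Hypothetical linear router over a 2-d embedding, with two experts.
router = lambda x: [x[0] - x[1], x[1] - x[0]]
tokens = [[1.0, 0.0], [0.8, 0.2], [0.6, 0.4]]
x_g, expert_id = sample_route(tokens, router)
print(x_g, expert_id)  # x_g is roughly [0.8, 0.2]; the sample goes to expert 0
```

Because every token in the sample shares one expert assignment, the attention computation inside that expert never mixes states produced by different experts.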
Hybrid mixture.

To explore LLM scaling in between model merging and model mixture, we propose the hybrid mixture as an intermediate solution. In a hybrid mixture, the bottom few layers of all single LLMs are merged, while the remaining layers follow any of the mixture-level designs.
4 Model Merging and Model Mixture for LLMs

4.1 Benchmark Datasets and Configs

Model Zoo.

Table 1 provides an overview of the model zoo. To benchmark model merging and mixture at different model zoo sizes, we construct 5 groups of Llama-2-based 7B chat LLMs, where the number of models ∈ [2, 4, 8, 12, 16]. In addition, to examine the difference in combining models from different domains, we introduce Which4 (Chat), consisting of four chat models, as a supplementary setting where no single model has a superior advantage in a specific domain.

After comparing the two ways of model scaling, we propose Model-GLUE, which combines selective merging and model mixture and is tested on the largest model family, Which16. Which16 builds on the 12 mergeable Llama-2-based models in Which12 and additionally includes four highly performant domain-specific models that cannot be merged: three CodeLlama-based models (two code models and one math model) and LLM360/CrystalChat. In particular, LLM360/CrystalChat uses a different architecture, initialization, and training data from the Llama-2-based models, while the CodeLlama series, initialized from Llama-2, adopts continued pretraining rather than fine-tuning as used for the models in Which12.
+ Table 1:All of the models in our model zoos and their performance. For each model zoo, we denote those models that belong to it with a colored star ✧: ✧ for Which2, ✧ for Which4 (Chat), ✧ for Which4 (Domain), ✧ for Which8, ✧ for Which12, and ✧ for Which16.
272
+ Model Model Zoo ARC WinoGrande MMLU GSM8K MBPP HumanEval Average
273
+ migtissera/Synthia-7B-v1.2 Mukherjee et al. (2023); Tissera (2023); Touvron et al. (2023) ✧✧✧✧✧
274
+ 55.03
275
+ %
276
+
277
+ 73.72
278
+ %
279
+
280
+ 48.18
281
+ %
282
+
283
+ 24.03
284
+ %
285
+
286
+ 17.80
287
+ %
288
+
289
+ 13.41
290
+ %
291
+
292
+ 38.70
293
+ %
294
+
295
+ neuralmagic/Llama-2-7b-evolcodealpaca Touvron et al. (2023) ✧✧✧✧
296
+ 49.57
297
+ %
298
+
299
+ 72.45
300
+ %
301
+
302
+ 41.70
303
+ %
304
+
305
+ 09.02
306
+ %
307
+
308
+ 25.60
309
+ %
310
+
311
+ 31.71
312
+ %
313
+
314
+ 38.34
315
+ %
316
+
317
+ teknium/OpenHermes-7B Touvron et al. (2023) ✧✧✧✧✧
318
+ 56.40
319
+ %
320
+
321
+ 73.88
322
+ %
323
+
324
+ 47.84
325
+ %
326
+
327
+ 09.25
328
+ %
329
+
330
+ 22.80
331
+ %
332
+
333
+ 19.51
334
+ %
335
+
336
+ 38.28
337
+ %
338
+
339
+ PygmalionAI/pygmalion-2-7b Touvron et al. (2023) ✧✧✧✧
340
+ 54.10
341
+ %
342
+
343
+ 75.37
344
+ %
345
+
346
+ 48.38
347
+ %
348
+
| Model | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| … | | | | 17.29% | 19.20% | 15.24% | 38.26% |
| meta-llama/Llama-2-7b-chat-hf Touvron et al. (2023) ✧✧✧✧✧✧ | 54.10% | 71.27% | 47.28% | 23.05% | 17.00% | 13.41% | 37.68% |
| Severus27/BeingWell_llama2_7b Touvron et al. (2023) ✧✧ | 54.95% | 72.30% | 46.19% | 22.29% | 13.40% | 13.41% | 37.09% |
| meta-math/MetaMath-7B-V1.0 Touvron et al. (2023); Yu et al. (2023) ✧✧✧✧ | 47.35% | 70.24% | 41.58% | 59.06% | 01.40% | 01.22% | 36.81% |
| lmsys/vicuna-7b-v1.5 Zheng et al. (2023); Touvron et al. (2023) ✧✧✧✧✧✧ | 53.75% | 70.56% | 49.78% | 19.11% | 06.00% | 19.51% | 36.45% |
| garage-bAInd/Platypus2-7B Hu et al. (2022); Lee et al. (2023); Touvron et al. (2023) ✧✧✧ | 55.12% | 74.03% | 49.82% | 02.50% | 19.00% | 14.63% | 35.85% |
| GOAT-AI/GOAT-7B-Community Bekbayev et al. (2023); Touvron et al. (2023) ✧✧ | 49.06% | 72.22% | 49.23% | 09.70% | 05.40% | 09.76% | 32.56% |
| stanford-oval/Llama-2-7b-WikiChat-fused Semnani et al. (2023); Touvron et al. (2023) ✧✧ | 50.94% | 68.59% | 39.13% | 00.00% | 13.80% | 04.27% | 29.45% |
| cognitivecomputations/dolphin-llama2-7b Touvron et al. (2023) ✧✧ | 42.66% | 65.35% | 46.52% | 10.69% | 00.80% | 02.44% | 28.08% |
| meta-math/MetaMath-Llemma-7B Azerbayev et al. (2023); Yu et al. (2023) ✧ | 46.76% | 64.33% | 46.33% | 62.40% | 42.00% | 31.10% | 48.82% |
| codellama/CodeLlama-7b-Instruct-hf Rozière et al. (2024) ✧ | 43.52% | 65.11% | 41.83% | 17.06% | 40.00% | 33.70% | 40.20% |
| ise-uiuc/Magicoder-S-CL-7B Wei et al. (2023); Rozière et al. (2024) ✧ | 43.77% | 63.38% | 35.94% | 14.33% | 50.20% | 63.41% | 45.17% |
| LLM360/CrystalChat Liu et al. (2023) ✧ | 51.54% | 70.64% | 52.39% | 32.45% | 38.80% | 35.37% | 46.87% |
For merging benchmarks, we experiment with larger model zoos, namely Which4, Which8, and Which12, whose models are filtered from Which16. For model mixture, which has a higher computational cost, we experiment with Which2 and Which4.

Benchmarks

We assess all models on three categories of benchmarks: (i) commonsense reasoning, using ARC Clark et al. (2018), WinoGrande Sakaguchi et al. (2019), and MMLU Hendrycks et al. (2020); (ii) mathematical ability, on GSM8K Cobbe et al. (2021); and (iii) coding ability, on MBPP Austin et al. (2021) and HumanEval Chen et al. (2021). The evaluation scripts are based on lm-eval for commonsense and mathematical reasoning and on bigcode-eval for the coding datasets. All benchmarks are under the MIT License.

4.2 Implementation Details for Merging
Proxy Dataset.

Since merging does not necessarily improve performance, we need a proxy dataset to determine whether to reject a particular round of merging in the Heuristic Strategy, or to compute the model fitness in the Evolutionary Strategy. (i) For MBPP, we select its validation set. (ii) For HumanEval, due to the unavailability of a validation set and its smaller size, we select 20% of the JavaScript version of HumanEvalPack Muennighoff et al. (2023). (iii) For other tasks, we choose the small-scale datasets released by tinyBenchmarks Polo et al. (2024) under the MIT License.

Model Zoo and Clustering.

The Merging Bench considers 3 model zoos: Which4, Which8, and Which16. We first cluster the model zoos based on cosine similarity with a threshold of 0.95. Because Which16 contains models that cannot be merged, we use the mergeable family obtained through clustering, which is referred to as Which12.
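The clustering step above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes models are represented by flattened weight vectors, uses the stated 0.95 cosine-similarity threshold, and greedily assigns each model to the first family whose representative it matches; all model names and toy vectors are hypothetical.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two flattened weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cluster_model_zoo(weights, threshold=0.95):
    """Greedily group models whose weights exceed the similarity threshold.

    `weights` maps model name -> flattened weight vector; returns a list of
    mergeable families (lists of model names).
    """
    families = []
    for name, vec in weights.items():
        placed = False
        for family in families:
            # Compare against the family's first member (single-link sketch).
            if cosine_similarity(vec, weights[family[0]]) >= threshold:
                family.append(name)
                placed = True
                break
        if not placed:
            families.append([name])
    return families

zoo = {
    "llama-2-chat": [1.0, 0.9, 1.1, 1.0],
    "vicuna":       [1.0, 0.92, 1.08, 1.01],  # near-identical fine-tune
    "crystalchat":  [0.1, -2.0, 0.5, 3.0],    # different init: own family
}
families = cluster_model_zoo(zoo)
```

With these toy vectors, the two Llama-derived stand-ins land in one mergeable family while the differently initialized model forms its own.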

Details of Heuristic Strategy and Evolutionary Strategy.

For the Heuristic Strategy, to reduce the search space, we only evaluate linear interpolation, and the coefficient search range is {0.1, 0.2, …, 0.9}. In Heuristic-Similarity, we use the average similarity of all weights as the criterion for selecting models in each round. For the Evolutionary Strategy, we follow the setting of Evolutionary Model Merge Akiba et al. (2024), which utilizes the CMA-ES Hansen (2006) algorithm implemented in Optuna Akiba et al. (2019). Unlike their setting, all parameters are randomly initialized, and the fitness value is defined as the accuracy on the proxy dataset. The optimization is conducted for 200 trials in all scenarios.
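The heuristic search with proxy-based rejection can be sketched as below. This is an illustrative reduction, not the paper's code: models are toy one-parameter state dicts, `proxy_accuracy` stands in for evaluation on the proxy dataset, and a merging round is accepted only if it improves proxy accuracy.

```python
def linear_merge(w_a, w_b, alpha):
    """Linear interpolation of two state dicts: (1 - alpha) * A + alpha * B."""
    return {k: (1 - alpha) * w_a[k] + alpha * w_b[k] for k in w_a}

def heuristic_greedy_merge(base, candidates, proxy_accuracy):
    """Greedily fold candidates into `base`, keeping a merge only if it
    improves accuracy on the proxy dataset (otherwise the round is rejected)."""
    coeffs = [round(0.1 * i, 1) for i in range(1, 10)]  # {0.1, ..., 0.9}
    best_acc = proxy_accuracy(base)
    for cand in candidates:
        trial_best, trial_acc = None, best_acc
        for alpha in coeffs:
            merged = linear_merge(base, cand, alpha)
            acc = proxy_accuracy(merged)
            if acc > trial_acc:
                trial_best, trial_acc = merged, acc
        if trial_best is not None:  # accept only improving rounds
            base, best_acc = trial_best, trial_acc
    return base, best_acc

# Toy proxy: accuracy peaks when the single weight "w" is close to 1.0.
proxy = lambda m: 1.0 - abs(m["w"] - 1.0)
merged, acc = heuristic_greedy_merge({"w": 0.0}, [{"w": 2.0}, {"w": -5.0}], proxy)
```

Here the first candidate is accepted at coefficient 0.5, while the second candidate degrades the proxy score for every coefficient and its round is rejected, mirroring the rejection rule described above.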

4.3 Model Merging Benchmark Results
Figure 4: (a) Comparison between different Heuristic Strategies on Which12, Which8, and Which4. (b) Comparison of different model merging methods in the Evolutionary Strategy.

We start our discussion by examining the effectiveness of existing approaches in depth. Although existing merging methods focus on improving the merging techniques, their effectiveness is usually validated on small-scale model zoos. For instance, Ilharco et al. (2023) primarily focuses on linear interpolation between two fine-tuned models, while Akiba et al. (2024) explores merging three.

Current model practitioners typically download pre-trained models, fine-tune them on their own data or with unique techniques for specific downstream tasks, and then upload them back to the public. This practice results in a large number of open-source models being available, yet they remain underutilized by current merging methods. To this end, instead of solely discussing the merging technique, we explore an orthogonal question: Can we scale up the size of the model zoo to cover more models, and design an automatic merging technique to benefit from their inclusion?

Failure Case of Existing Approaches.

To begin with, we provide a motivating example to show the failure case of the existing approach. We consider three models, Llama-2-Chat Touvron et al. (2023), Vicuna Zheng et al. (2024), and CodeLlama Rozière et al. (2023), all initialized from the same base model, Llama-2 Touvron et al. (2023). We merge Vicuna and CodeLlama with Llama-2-Chat, respectively, and report the evaluation results in Table 14 in Appendix B.2. We evaluate 6 representative merging techniques implemented in mergekit Goddard et al. (2024), including linear interpolation Wortsman et al. (2022), SLERP Shoemake (1985), Model Stock Jang et al. (2024), Task Arithmetic Ilharco et al. (2023), DARE Xu et al. (2024), and TIES Yadav et al. (2023). By merging Llama-2-Chat and Vicuna, the merged model achieves better performance than either single model, while merging Llama-2-Chat and CodeLlama fails to outperform the single models and may even lead to a significant drop in performance, as also noted by Xu et al. (2024). The results indicate a potentially severe performance drop when an unmergeable new model (e.g., CodeLlama) is included in merging, even if it is obtained from the same pre-trained checkpoint. This failure case motivates us to design a strategy that automatically selects models for merging and excludes the models that cannot be merged.
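Of the techniques listed above, Task Arithmetic has a particularly compact form: the merged weights are the shared base checkpoint plus scaled "task vectors" (the per-model deltas from the base). The following is a minimal sketch with toy one-parameter state dicts; the coefficient values and weights are illustrative, not taken from the paper's experiments.

```python
def task_arithmetic_merge(base, finetuned, lambdas):
    """Task Arithmetic sketch: add scaled task vectors (theta_i - theta_base)
    onto the shared base checkpoint. `base` and each entry of `finetuned`
    are state dicts with identical keys."""
    merged = dict(base)
    for weights, lam in zip(finetuned, lambdas):
        for k in merged:
            merged[k] += lam * (weights[k] - base[k])
    return merged

base = {"w": 1.0}
chat = {"w": 1.4}   # task vector +0.4
code = {"w": 0.6}   # task vector -0.4
merged = task_arithmetic_merge(base, [chat, code], [0.5, 0.5])
```

Note how the two opposite task vectors partially cancel here; this kind of interference between deltas is one intuition for why merging mismatched models (as in the CodeLlama case above) can hurt performance.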

In the following paragraphs, we explore several solutions tailored for large-scale model merging. These variations address different resource and speed requirements. The introduction of these methods is organized around answering the following key questions.

Q1: Do handcrafted rules apply to automated model selection, and which one performs best? A: Yes, via a greedy search approach.

In this section, we explore three potential heuristics for model selection and report the results in Figure 4(a). We include the performance of the "best single model" (the participant model that achieves the best averaged performance before merging). We additionally validate the performance of the heuristic-based merging techniques, which are detailed in Section 3.2. As indicated by the results, the merging technique based on Heuristic-Coefficient yields consistently superior performance when the model zoo is large. For Which4, Heuristic-Average achieves better performance, while Heuristic-Coefficient performs poorly. This is primarily because the domain-specific models in Which4 exhibit similar performances and are indispensable.

Q2: How can the Evolutionary Strategy be utilized for coefficient optimization in model merging?

We divide the problem into the following sub-questions: (i) Which merging method is most compatible with the Evolutionary Strategy? (ii) Can finer-grained optimization lead to a better merged model? (iii) How can we efficiently merge in a large model zoo? For (i), A: simpler methods such as Linear and Task Arithmetic are more competitive. We compare four methods: Linear, Task Arithmetic, DARE, and TIES. As shown in Figure 4(b), Linear merging consistently achieves strong results. However, when the number of parameters to be optimized is small, Task Arithmetic performs slightly better than Linear. Under a fixed computational budget, due to the doubling of parameters to be optimized, DARE and TIES exhibit slightly lower performance than the other methods. For (ii), A: Yes, but it requires a larger computational budget. We group adjacent 𝑛 decoder layers together, where they share the same coefficients, with group size 𝑛 ∈ [32, 8, 4, 1]. When 𝑛 = 8, better results are achieved compared to 𝑛 = 32, as shown in Table 17. However, as we further decrease the group size, the performance slightly declines, which could be attributed to our relatively small budget. For (iii), A: use the Heuristic Strategy to roughly search for coefficients and then fine-tune them using the Evolutionary Strategy. As shown in Table 18, the combination of the two strategies yields better results with fewer trials. For implementation details, please refer to Appendix A.2.
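The layer-grouping scheme for (ii) can be sketched as below: one searchable coefficient per group of 𝑛 adjacent decoder layers is expanded into a per-layer coefficient list. The 32-layer depth matches a Llama-2-7B-style decoder; the coefficient values themselves are illustrative.

```python
def grouped_coefficients(num_layers, group_size, group_coeffs):
    """Expand one coefficient per group of `group_size` adjacent decoder
    layers into a per-layer coefficient list, so layers within a group
    share the same merging coefficient."""
    assert num_layers % group_size == 0
    assert len(group_coeffs) == num_layers // group_size
    return [group_coeffs[layer // group_size] for layer in range(num_layers)]

# 32 decoder layers, groups of n = 8 -> only 4 coefficients to optimize.
per_layer = grouped_coefficients(32, 8, [0.3, 0.5, 0.7, 0.9])
```

With n = 32 the whole model shares a single coefficient; shrinking n toward 1 gives per-layer coefficients but multiplies the search dimensionality, which is why a larger optimization budget is needed.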

4.4 Implementation Details for Mixture
Table 2: Model mixture methods and their abbreviations used in our study. Methods applicable to models with distinct architectures are highlighted in gray.

| Abbreviation | Mix. Level | Router | Router Input | Hybrid |
|---|---|---|---|---|
| F-L-T | FFN | Linear | Token | ✗ |
| Hybrid F-L-T | FFN | Linear | Token | ✓ |
| F-L-S | FFN | Linear | Sample | ✗ |
| F-M-S | FFN | MLP | Sample | ✗ |
| B-L-S | Block | Linear | Sample | ✗ |
| B-M-S | Block | MLP | Sample | ✗ |
| M-L-S | Model | Linear | Sample | ✗ |
Model Zoo and Router Initialization.

In Mixture Bench, we experiment with the Which2 and Which4 model settings. For router design, we mainly adopt a training-free linear-layer router initialized from the prompt vector, as previous studies have demonstrated its effectiveness in the zero-shot MoE model Goddard et al. (2024). For specific prompt settings, we refer to the Beyonder model series. For the routing algorithm, we use Top-1 routing for Which2 and for the Block-level and Model-level mixtures on Which4, and Top-2 routing for the Which4 FFN-level mixture.
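The training-free routing idea can be sketched as follows: each expert's router weights come from an embedding of its prompt, and an input is dispatched to the top-k experts by dot-product score. This is a toy illustration only; the two-dimensional "embeddings" and expert names are hypothetical stand-ins for real prompt encodings.

```python
def route(hidden, expert_prompts, top_k=1):
    """Training-free linear router sketch: score each expert by the dot
    product between the input representation and that expert's prompt
    vector, then keep the top-k experts."""
    scores = {name: sum(h * p for h, p in zip(hidden, vec))
              for name, vec in expert_prompts.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]

expert_prompts = {
    "math": [0.9, 0.1],  # stand-in embedding of a math-domain prompt
    "code": [0.1, 0.9],  # stand-in embedding of a coding-domain prompt
}
experts = route([1.0, 0.2], expert_prompts, top_k=1)
```

Top-1 routing (as used for Which2) keeps a single expert per input; Top-2 (as for the Which4 FFN-level mixture) would return the two highest-scoring experts instead.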

Post-mixture training.

For MLP routers, which are randomly initialized, we fine-tune the model by language modeling on the GPT4All dataset Anand et al. (2023) (Apache 2.0 License), updating only the router. For all router training experiments, we apply a batch size of 128, a cosine learning rate scheduler, a learning rate of 5e-5, and 1 epoch.

Mixture Method Abbreviations.

To simplify the description, we use abbreviations to denote the different mixture methods, as listed in Table 2.

4.5 Model Mixture Benchmark Results

In this section, we answer five main research questions about mixture variants: mixture level, router design, router input, and hybrid mixture. We also explore mixing very different models that cannot be merged, as a probe for our Model-GLUE recipe that combines merging and mixture for LLM scaling.

Q1: At which level does the model mixture manifest its utmost effectiveness?
Table 3: Comparison of different mixture levels. For each task, we highlight the best performance in each model zoo in bold.

| Model | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| **Which2** | | | | | | | |
| Best Single Model | 54.27% | 71.51% | 47.24% | 21.30% | 18.00% | 13.06% | 37.68% |
| F-L-S | 52.82% | 70.80% | 50.04% | **23.12%** | 19.00% | 17.68% | 38.91% |
| B-L-S | 52.73% | 70.01% | 49.90% | 19.94% | 18.84% | 15.85% | 37.88% |
| M-L-S | **54.44%** | **72.38%** | **50.51%** | 22.21% | **20.00%** | **20.73%** | **40.04%** |
| **Which4** | | | | | | | |
| Best Single Model | **55.03%** | 73.72% | **48.33%** | 24.26% | 17.80% | 13.41% | 38.70% |
| F-L-S | 53.75% | 73.88% | 47.97% | 34.87% | **21.80%** | **23.17%** | 42.57% |
| B-L-S | 52.65% | **74.66%** | 47.05% | 21.15% | 20.40% | 14.63% | 38.42% |
| M-L-S | 49.06% | 72.14% | 41.81% | **60.05%** | 17.60% | 15.24% | **42.65%** |

A: Model-level mixture is consistently better. Our comparative analysis of the {FFN, Block, Model}-level mixtures, all employing the linear router and the sample routing strategy as presented in Table 3, consistently demonstrates the superiority of the Model-level mixture under the Which2 and Which4 settings. This could be attributed to the design that the Model-level mixture routes each sample to one expert model, thereby avoiding conflicts between different expert models and maximizing the expertise of the most appropriate expert. Since the experts are not derived from the same pre-training process, directly merging their inconsistent representation spaces harms the performance of the mixture model, with more expert parameters leading to worse results. This is especially evident for the Block-level mixture, as routing is performed at each transformer layer and the representation is fed into different expert blocks in series, causing confusion when switching between different experts' knowledge.

Q2: Does a more complex router design bring better results?
Table 4: Comparison between linear and MLP routers in the Which2 setting. We highlight the better performance within each pair in bold.

| Model | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| F-L-T | 53.41% | 70.48% | **50.74%** | **23.28%** | **20.80%** | 16.46% | **39.20%** |
| F-M-T | **53.58%** | **72.06%** | 50.01% | 21.92% | 17.40% | **17.68%** | 38.78% |
| B-L-S | **52.73%** | 70.01% | **49.90%** | 19.94% | **18.84%** | **15.85%** | **37.88%** |
| B-M-S | 51.53% | **70.56%** | 49.41% | 19.94% | 16.60% | 14.02% | 37.01% |

A: Not necessarily, as the linear router outperforms the MLP router. From Table 4, the linear router without additional training slightly surpasses the MLP router models, i.e., F-L-T over F-M-T and B-L-S over B-M-S. Specifically, linear router models are better on the math and coding datasets, validating that the prompt vector is effective in assorting samples from different domains, which is otherwise too implicit to learn via direct language modeling.

Q3: Does model mixture directly work on unmergeable models?
Table 5: Comparison of the mixture of an unmergeable model pair (Llama-2-7b-chat and CrystalChat). We highlight the better performance in bold.

| Model | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| Best Single Model | **52.05%** | 69.46% | **50.77%** | 27.22% | **39.60%** | **35.98%** | **45.85%** |
| M-L-S | 50.68% | **69.77%** | 50.08% | **27.82%** | 33.80% | 30.48% | 43.77% |

A: No. We directly apply the Which2 Model-level mixture setting to Llama-2-7b-chat and CrystalChat, an unmergeable model pair with different architectures and initializations. As shown in Table 5, the performance is slightly behind the best single model. This may be due to the simple prompts and direct mixture, which fail to coordinate the divergence between drastically different models. When we evaluate more complex prompts for the same model pair, the mixture model outperforms the best single model; see Table 19 for more information.

Q4: Which router input is better, token-level or sample-level?
Table 6: Comparison of different router input designs. Which4 includes one group with chatting models (Chat) and another with different domain models (Domain). We highlight the best-performing mixture methods in bold.

| Model | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| **Which2** | | | | | | | |
| Best Single Model | 54.27% | 71.51% | 47.24% | 21.30% | 18.00% | 13.06% | 37.68% |
| F-L-T | **53.41%** | 70.48% | **50.74%** | **23.28%** | **20.80%** | 16.46% | **39.20%** |
| F-L-S | 52.82% | **70.80%** | 50.04% | 23.12% | 19.00% | **17.68%** | 38.91% |
| **Which4** | | | | | | | |
| Best Single Model | 55.03% | 73.72% | 48.33% | 24.26% | 17.80% | 13.41% | 38.70% |
| Chat F-L-T | 55.63% | 72.77% | **50.28%** | 23.88% | 20.00% | 22.56% | 40.85% |
| Chat F-L-S | 53.75% | 70.96% | 49.78% | 20.32% | 20.40% | 20.12% | 39.22% |
| Domain F-L-T | **55.72%** | **74.11%** | 48.32% | 30.17% | **22.00%** | 20.12% | 41.74% |
| Domain F-L-S | 53.75% | 73.88% | 47.97% | **34.87%** | 21.80% | **23.17%** | **42.57%** |

A: Not quite different; token input suits a mixture of same-domain models. Table 6 shows that the performance of token-based and sample-based routing is quite close. In particular, for Which2 and Which4 (Chat), where the models are all trained for general chatting purposes, token routing outperforms, whereas sample routing is better for the default Which4 (Domain) with differently specialized models. This may be because divergence in model knowledge and representation spaces causes conflicts in fine-grained token routing.

Q5: Is it feasible for hybrid mixtures to provide enhancements?
Table 7: Comparison between F-L-T methods with and without the hybrid mixture technique. We highlight the best-performing mixture methods in bold.

| Model | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| **Which2** | | | | | | | |
| Best Single Model | 54.27% | 71.51% | 47.24% | 21.30% | 18.00% | 13.06% | 37.68% |
| F-L-T | 53.41% | 70.48% | **50.74%** | 23.28% | 20.80% | 16.46% | 39.20% |
| Hybrid F-L-T | **54.44%** | **71.19%** | 50.45% | **23.96%** | **21.80%** | **18.29%** | **40.02%** |
| **Which4** | | | | | | | |
| Best Single Model | 55.03% | 73.72% | 48.33% | 24.26% | 17.80% | 13.41% | 38.70% |
| F-L-T | **55.72%** | **74.11%** | **48.32%** | 30.17% | 22.00% | 20.12% | 41.74% |
| Hybrid F-L-T | 54.86% | 73.80% | 48.23% | **37.53%** | **24.30%** | **23.17%** | **43.65%** |

A: Yes. Our experiments on F-L-T with vs. without the hybrid mixture, as detailed in Table 7, demonstrate that the hybrid mixture significantly improves performance on average while simultaneously reducing the memory overhead during inference. This improvement may be attributed to the higher sensitivity of the initial transformer blocks: avoiding MoE for these blocks can yield performance gains, as suggested by a few previous works as well Dai et al. (2024); Rajbhandari et al. (2022). Surprisingly, our results show that the hybrid F-L-T model consistently outperforms the standard F-L-T on math and code tasks. Our further analysis indicates that this improvement might stem from the conversational nature of the content in the GSM8K, MBPP, and HumanEval datasets, which appears to challenge the routing mechanisms within the initial transformer blocks, leading to ineffective expert specialization.

5 Superior Recipes to Aggregate LLM Knowledge
5.1 Model Merging vs. Mixture
Q1: For a mergeable model zoo, how should we choose between merging and mixture?

A: With limited computational resources and similar models, merging is a simple and effective method; for domain-specific models, mixture can bring greater improvements.

Table 8: Comparison between the best merging approach vs. the best mixture approach on Which4 (Domain) and Which4 (Chat).

| Model | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| Best Single Model | 55.03% | 73.72% | 48.18% | 24.03% | 17.80% | 13.41% | 38.70% |
| **Which4 (Domain)** | | | | | | | |
| Merging | 54.01% | 73.64% | 47.39% | 43.75% | 22.40% | 21.95% | 43.86% |
| Mixture | 54.86% | 74.11% | 48.23% | 49.81% | 18.40% | 18.29% | 43.95% |
| **Which4 (Chat)** | | | | | | | |
| Merging | 56.23% | 73.72% | 50.51% | 25.85% | 21.00% | 21.95% | 41.54% |
| Mixture | 53.75% | 70.96% | 49.80% | 19.94% | 19.80% | 20.73% | 39.16% |

Detailed results are presented in Table 8. For Which4 (Domain), thanks to the appropriately designed linear routers, model mixture can fully leverage the various domain-specific models, thus slightly outperforming merging. For Which4 (Chat), we adopt the optimal settings from Which4 (Domain) and only change the model zoo. Since the individual models do not exhibit superior capabilities in any single domain, it is challenging to design suitable routers at a low cost; therefore, mixture performs significantly worse than merging. Furthermore, although combining the homogeneous models in Which4 (Chat) brings some improvement, Which4 (Domain) overall outperforms Which4 (Chat). Hence, increasing the diversity among the models makes a greater contribution to the combined model.

5.2 Model-GLUE: selective merging then model mixture for better LLM scaling
Q2: How to combine models with greater differences in an extensive and varied model zoo?

In Which16, a larger and more diverse model zoo, some models cannot be merged due to structural differences, and some would degrade in performance when merged with other models. Therefore, we first cluster the models based on cosine similarity. Within each mergeable family, we perform either merging or mixture. We initially employ heuristic merging strategies and report the best results (i.e., Full Merging) in Table 9. The Llama-2 family (i.e., Which12) consists of up to 12 models, so directly combining them through the mixture is inefficient. Thus, we only consider the models selected by merging and report the results of the F-L-T Mixture. From Table 9, we can observe that Full Merging outperforms the F-L-T Mixture.

Table 9: Comparison between the best single model, Full Merging, F-L-T Mixture, and our Model-GLUE.

| Model | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| Best Single Model | 46.76% | 64.33% | 46.33% | 62.40% | 42.00% | 31.10% | 48.82% |
| Full Merging | 55.12% | 73.64% | 50.13% | 39.35% | 21.80% | 21.34% | 43.56% |
| F-L-T Mixture | 54.69% | 73.32% | 48.74% | 35.18% | 22.60% | 21.34% | 42.65% |
| Model-GLUE | 51.62% | 70.56% | 51.85% | 53.53% | 47.20% | 51.83% | 54.43% |

Therefore, we select Full Merging as the representative model for the Llama-2 family and combine it, via model mixture, with the other models that could not be merged. On average, Model-GLUE demonstrates a 5.61% improvement over the Best Single Model. More details are presented in Appendix A.4.
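The overall Model-GLUE recipe — cluster the zoo, selectively merge within each mergeable family, then combine family representatives with a model-level mixture — can be sketched as an orchestration skeleton. The three callables below are stand-ins for the clustering, selective-merging, and mixture steps described in this paper; the toy model names and lambdas are purely illustrative.

```python
def model_glue(zoo, cluster, merge_family, mix):
    """Model-GLUE recipe sketch: (1) cluster the zoo into mergeable
    families, (2) selectively merge within each family to get one
    representative, (3) combine the representatives with a model-level
    mixture when more than one family remains."""
    families = cluster(zoo)
    representatives = [merge_family(f) for f in families]
    if len(representatives) == 1:
        return representatives[0]
    return mix(representatives)

# Toy stand-ins: cluster by architecture tag; "merging" joins names.
zoo = ["llama-a", "llama-b", "crystal-a"]
cluster = lambda ms: [[m for m in ms if m.startswith("llama")],
                      [m for m in ms if m.startswith("crystal")]]
merge_family = lambda fam: "+".join(fam)
mix = lambda reps: {"experts": reps}
combined = model_glue(zoo, cluster, merge_family, mix)
```

In the paper's setting, the Llama-2 family representative is the Full Merging model, and the mixture step glues it to the architecturally incompatible models.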

6 Discussion with Other LLM Aggregation Techniques

Thus far, we have mainly focused on two LLM aggregation techniques: model merging and mixture. In this section, we discuss other potential techniques that could help scale existing LLMs.

Model Stacking.

Research has demonstrated that stacking a model on itself can accelerate training convergence compared with training a model of double the size from scratch Gong et al. (2019); Gu et al. (2020); Wang et al. (2023b); Wu et al. (2024); Kim et al. (2023). This concept extends naturally to stacking multiple models into one larger model. Our experimental results indicate that model stacking with lightweight fine-tuning can yield superior performance compared to various merging and mixture models. For instance, stacking the 7B Llama-2-chat and Vicuna can achieve around 55% on the MMLU benchmark. Compared to model mixture, model stacking offers less flexibility in terms of model choices. Although the resulting architecture is more standardized than MoE, increasing the model depth through stacking also results in higher latency than mixture models, where subnetworks infer in parallel. Additionally, model stacking does not simplify the design space, such as determining whether, which, and how many layers should be dropped when stacking two heterogeneous models. We conducted a preliminary investigation employing model stacking techniques to address two primary research questions: (1) Can model stacking effectively combine the capabilities of two distinct models and surpass the performance of self-stacking a single model? (2) What is the impact of layer dropping on stacking performance?

Specifically, we examine the relationship between the number of dropped layers (𝐾) and the resulting downstream task accuracy. To this end, we select 7B Llama-2-Chat and Vicuna as the base models and fine-tune the stacked models for 10 billion tokens. The obtained results are presented in Table 10. In the first two rows, we report the performance of the two base models, revealing that Llama and Vicuna exhibit advantages on different datasets. In the subsequent two rows, we observe that stacking dissimilar models generally outperforms self-stacked models, and the weaknesses of one model can be compensated for by a stronger one. Moving forward, we explore the effect of varying the number of dropped layers. Our findings indicate that even when dropping half of each model (𝐾 = 16), the stacked 7B models can still significantly enhance performance across tasks.
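The layer-dropping setup can be sketched at the level of layer lists. This is an assumption-laden illustration, not the paper's exact configuration: it drops the top 𝐾 layers of the first model and the bottom 𝐾 layers of the second before concatenating (so 𝐾 = 16 discards half of each 32-layer 7B model), and the exact drop positions are an illustrative choice.

```python
def stack_models(layers_a, layers_b, k):
    """Depth-stacking sketch: keep the bottom (n - k) layers of model A
    and the top (n - k) layers of model B, then concatenate them into one
    deeper decoder. Drop positions are an illustrative assumption."""
    kept_a = layers_a[: len(layers_a) - k]  # bottom layers of A
    kept_b = layers_b[k:]                   # top layers of B
    return kept_a + kept_b

llama = [f"llama_layer_{i}" for i in range(32)]
vicuna = [f"vicuna_layer_{i}" for i in range(32)]
stacked = stack_models(llama, vicuna, k=16)  # (32-16) + (32-16) = 32 layers
```

As 𝐾 grows, the stacked model loses depth (e.g., 𝐾 = 24 leaves only 16 layers), which is consistent with the sharp accuracy collapse for large 𝐾 in Table 10.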

Table 10: Comparison of different model stacking configurations.

| Model | ARC | WinoGrande | MMLU | HellaSwag | TruthfulQA |
|---|---|---|---|---|---|
| Llama-2-chat | 54.10% | 71.27% | 47.28% | 78.71% | 45.32% |
| Vicuna | 53.75% | 70.56% | 49.78% | 77.19% | 50.36% |
| Llama / Llama (𝐾 = 8) | 53.92% | 69.14% | 52.76% | 73.74% | 46.36% |
| Llama / Vicuna (𝐾 = 8) | 56.14% | 70.80% | 55.20% | 73.67% | 46.84% |
| Llama / Vicuna (𝐾 = 12) | 55.42% | 69.45% | 53.55% | 73.62% | 45.59% |
| Llama / Vicuna (𝐾 = 16) | 54.35% | 69.69% | 52.52% | 73.75% | 45.92% |
| Llama / Vicuna (𝐾 = 20) | 39.59% | 61.33% | 44.93% | 62.10% | 42.90% |
| Llama / Vicuna (𝐾 = 24) | 28.15% | 52.88% | 25.51% | 43.07% | 39.10% |
Model Communication.

Model communication Wu et al. (2023); Li et al. (2024); Liang et al. (2023) is a framework that enables the development of LLM applications through multiple conversable agents that collaborate to complete tasks. This approach allows developers to design complex LLM application workflows as multi-agent conversations, where agents with various roles and capabilities, driven by LLMs, tools, or human inputs, interact with each other. Unlike model merging, mixture, and stacking techniques, LLM communication is orthogonal to the primary focus of this paper because it does not modify the model weights; instead, it leverages the in-context learning and conversational capabilities of LLMs to coordinate agents. An empirical comparison with this class of methods is beyond the scope of this study and will be explored in future research.

7 Limitations

For LLM scaling studies, first, while empirical evidence suggests that increasing model size, data volume, and compute leads to better performance, there is little theoretical clarity on the exact mechanisms behind these improvements. Second, although scaling laws suggest that performance continues to improve as models grow, recent evidence indicates that scaling may yield diminishing returns beyond a certain point. In addition, our work focuses on benchmarking results; the analysis of why model merging improves performance could be strengthened by post hoc study, such as examining parameter distributions and similarities during model operations.

8 Conclusion

In this paper, we explore scaling LLMs based on a model zoo of real-world pre-trained LLMs. We first benchmark state-of-the-art LLM merging, mixture, and model stacking techniques. Building on these findings, we then propose a novel LLM scaling framework, Model-GLUE. Specifically, we scale up the model zoo, closely examine existing model merging techniques, and distill selective merging recipes based on heuristics and learnable algorithms. Further, we investigate variants of Mixture-of-Experts for combining LLMs and suggest that they can serve as an alternative for merging failure cases. Finally, we integrate selective merging strategies with model mixture techniques, presenting this as a comprehensive solution for scaling a diverse array of LLM collections. Future work will add model stacking and communication to our Model-GLUE framework.

References

Agarwal et al. [2024] Naman Agarwal, Pranjal Awasthi, Satyen Kale, and Eric Zhao. Stacking as accelerated gradient descent. arXiv preprint arXiv:2403.04978, 2024.

Ainsworth et al. [2022] Samuel K. Ainsworth, Jonathan Hayase, and Siddhartha Srinivasa. Git re-basin: Merging models modulo permutation symmetries. arXiv preprint arXiv:2209.04836, 2022.

Akiba et al. [2019] Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2623–2631, 2019.

Akiba et al. [2024] Takuya Akiba, Makoto Shing, Yujin Tang, Qi Sun, and David Ha. Evolutionary optimization of model merging recipes, 2024.

Anand et al. [2023] Yuvanesh Anand, Zach Nussbaum, Brandon Duderstadt, Benjamin M. Schmidt, Adam Treat, and Andriy Mulyar. GPT4All-J: An Apache-2 licensed assistant-style chatbot, 2023.

Austin et al. [2021] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. Program synthesis with large language models. ArXiv, abs/2108.07732, 2021.

Azerbayev et al. [2023] Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model for mathematics. arXiv preprint arXiv:2310.10631, 2023.

Bekbayev et al. [2023] Aibek Bekbayev, Sungbae Chun, Yerzat Dulat, and James Yamazaki. The poison of alignment, 2023.

Chen et al. [2021] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, David W. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, Igor Babuschkin, Suchir Balaji, Shantanu Jain, Andrew Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew M. Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. ArXiv, abs/2107.03374, 2021.

Clark et al. [2018] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. ArXiv, abs/1803.05457, 2018.

Cobbe et al. [2021] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. ArXiv, abs/2110.14168, 2021.

Dai et al. [2024] Damai Dai, Chengqi Deng, Chenggang Zhao, R. X. Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Y. Wu, Zhenda Xie, Y. K. Li, Panpan Huang, Fuli Luo, Chong Ruan, Zhifang Sui, and Wenfeng Liang. DeepSeekMoE: Towards ultimate expert specialization in mixture-of-experts language models, 2024.

Ding et al. [2024] Ning Ding, Yulin Chen, Ganqu Cui, Xingtai Lv, Weilin Zhao, Ruobing Xie, Bowen Zhou, Zhiyuan Liu, and Maosong Sun. Mastering text, code and math simultaneously via fusing highly specialized language models, 2024.

Dodge et al. [2022] Jesse Dodge, Taylor Prewitt, Rémi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole DeCario, and Will Buchanan. Measuring the carbon intensity of AI in cloud instances. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022.

Fedus et al. [2022] William Fedus, Barret Zoph, and Noam Shazeer. Switch Transformers: Scaling to trillion parameter models with simple and efficient sparsity. J. Mach. Learn. Res., 23:120:1–120:39, 2022.

Goddard et al. [2024] Charles Goddard, Shamane Siriwardhana, Malikeh Ehghaghi, Luke Meyers, Vlad Karpukhin, Brian Benedict, Mark McQuade, and Jacob Solawetz. Arcee's MergeKit: A toolkit for merging large language models. arXiv preprint arXiv:2403.13257, 2024.

Gong et al. [2019] Linyuan Gong, Di He, Zhuohan Li, Tao Qin, Liwei Wang, and Tieyan Liu. Efficient training of BERT by progressively stacking. In International Conference on Machine Learning, pages 2337–2346. PMLR, 2019.

Gu et al. [2020] Xiaotao Gu, Liyuan Liu, Hongkun Yu, Jing Li, Chen Chen, and Jiawei Han. On the transformer growth for progressive BERT training. arXiv preprint arXiv:2010.12562, 2020.

Hansen [2006] Nikolaus Hansen. The CMA evolution strategy: a comparing review. In Towards a New Evolutionary Computation: Advances in the Estimation of Distribution Algorithms, pages 75–102, 2006.

Hendrycks et al. [2020] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Xiaodong Song, and Jacob Steinhardt. Measuring massive multitask language understanding. ArXiv, abs/2009.03300, 2020.

Hu et al. [2022] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=nZeVKeeFYf9.

Ilharco et al. [2023] Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic, 2023.

Imfeld et al. [2023] Moritz Imfeld, Jacopo Graldi, Marco Giordano, Thomas Hofmann, Sotiris Anagnostidis, and Sidak Pal Singh. Transformer fusion with optimal transport. arXiv preprint arXiv:2310.05719, 2023.

Jang et al. [2024] Dong-Hwan Jang, Sangdoo Yun, and Dongyoon Han. Model Stock: All we need is just a few fine-tuned models, 2024.

Jiang et al. [2024] Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mixtral of experts, 2024.

Jin et al. [2023] Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, and Pengxiang Cheng. Dataless knowledge fusion by merging weights of language models, 2023.

Kaplan et al. [2020] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeff Wu, and Dario Amodei. Scaling laws for neural language models. ArXiv, abs/2001.08361, 2020.

Kim et al. [2023] Dahyun Kim, Chanjun Park, Sanghoon Kim, Wonsung Lee, Wonho Song, Yunsu Kim, Hyeonwoo Kim, Yungi Kim, Hyeonju Lee, Jihoo Kim, et al. SOLAR 10.7B: Scaling large language models with simple yet effective depth up-scaling. arXiv preprint arXiv:2312.15166, 2023.

Lee et al. [2023] Ariel N. Lee, Cole J. Hunter, and Nataniel Ruiz. Platypus: Quick, cheap, and powerful refinement of LLMs, 2023.

Lepikhin et al. [2020] Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam M. Shazeer, and Z. Chen. GShard: Scaling giant models with conditional computation and automatic sharding. ArXiv, abs/2006.16668, 2020.

Li et al. [2024] Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. CAMEL: Communicative agents for "mind" exploration of large language model society. Advances in Neural Information Processing Systems, 36, 2024.

Liang et al. [2022] Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, and Zhangyang Wang. M3ViT: Mixture-of-experts vision transformer for efficient multi-task learning with model-accelerator co-design. ArXiv, abs/2210.14793, 2022.

Liang et al. [2023] Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. Encouraging divergent thinking in large language models through multi-agent debate. arXiv preprint arXiv:2305.19118, 2023.

Liu et al. [2023] Zhengzhong Liu, Aurick Qiao, Willie Neiswanger, Hongyi Wang, Bowen Tan, Tianhua Tao, Junbo Li, Yuqi Wang, Suqi Sun, Omkar Pangarkar, Richard Fan, Yi Gu, Victor Miller, Yonghao Zhuang, Guowei He, Haonan Li, Fajri Koto, Liping Tang, Nikhil Ranjan, Zhiqiang Shen, Xuguang Ren, Roberto Iriondo, Cun Mu, Zhiting Hu, Mark Schulze, Preslav Nakov, Tim Baldwin, and Eric P. Xing. LLM360: Towards fully transparent open-source LLMs, 2023.

Matena and Raffel [2022] Michael Matena and Colin Raffel. Merging models with Fisher-weighted averaging, 2022.

Muennighoff et al. [2023] Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, and Shayne Longpre. OctoPack: Instruction tuning code large language models. arXiv preprint arXiv:2308.07124, 2023.

Mukherjee et al. [2023] Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. Orca: Progressive learning from complex explanation traces of GPT-4, 2023.

Nagarajan and Kolter [2021] Vaishnavh Nagarajan and J. Zico Kolter. Uniform convergence may be unable to explain generalization in deep learning, 2021.

OpenAI [2023] OpenAI. GPT-4 technical report. ArXiv, abs/2303.08774, 2023.

Polo et al. [2024] Felipe Maia Polo, Lucas Weber, Leshem Choshen, Yuekai Sun, Gongjun Xu, and Mikhail Yurochkin. tinyBenchmarks: evaluating LLMs with fewer examples. ArXiv, abs/2402.14992, 2024.

Rajbhandari et al. [2022] Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, and Yuxiong He. DeepSpeed-MoE: Advancing mixture-of-experts inference and training to power next-generation AI scale, 2022.

Reddi et al. [2023] Sashank J. Reddi, Sobhan Miryoosefi, Stefani Karp, Shankar Krishnan, Satyen Kale, Seungyeon Kim, and Sanjiv Kumar. Efficient training of language models using few-shot learning. In International Conference on Machine Learning, pages 14553–14568. PMLR, 2023.

Rozière et al. [2023] Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, I. Evtimov, Joanna Bitton, Manish P. Bhatt, Cristian Cantón Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. Code Llama: Open foundation models for code. ArXiv, abs/2308.12950, 2023.

Rozière et al. [2024] Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. Code Llama: Open foundation models for code, 2024.

Sakaguchi et al. [2019] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. An adversarial Winograd schema challenge at scale, 2019.

Semnani et al. [2023] Sina Semnani, Violet Yao, Heidi Zhang, and Monica Lam. WikiChat: Stopping the hallucination of large language model chatbots by few-shot grounding on Wikipedia. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2387–2413, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.157. URL https://aclanthology.org/2023.findings-emnlp.157.

Shazeer et al. [2017] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer, 2017.

Shen et al. [2023] Yikang Shen, Zheyu Zhang, Tianyou Cao, Shawn Tan, Zhenfang Chen, and Chuang Gan. ModuleFormer: Learning modular large language models from uncurated data. ArXiv, abs/2306.04640, 2023.

Shoemake [1985] Ken Shoemake. Animating rotation with quaternion curves. In Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '85, pages 245–254, New York, NY, USA, 1985. Association for Computing Machinery. ISBN 0897911660.

Sukhbaatar et al. [2024] Sainbayar Sukhbaatar, Olga Golovneva, Vasu Sharma, Hu Xu, Xi Victoria Lin, Baptiste Rozière, Jacob Kahn, Daniel Li, Wen-tau Yih, Jason Weston, and Xian Li. Branch-Train-MiX: Mixing expert LLMs into a mixture-of-experts LLM, 2024.

Tissera [2023] Migel Tissera. Synthia-70B-v1.2b: Synthetic intelligent agent. https://huggingface.co/migtissera/Synthia-13B, 2023.

Touvron et al. [2023] Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Cantón Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. ArXiv, abs/2307.09288, 2023.

Verma and Elbayad [2024] Neha Verma and Maha Elbayad. Merging text transformer models from different initializations. arXiv preprint arXiv:2403.00986, 2024.

Wan et al. [2024] Fanqi Wan, Xinting Huang, Deng Cai, Xiaojun Quan, Wei Bi, and Shuming Shi. Knowledge fusion of large language models. In The Twelfth International Conference on Learning Representations, 2024.

Wang et al. [2023a] Hongyi Wang, Felipe Maia Polo, Yuekai Sun, Souvik Kundu, Eric Xing, and Mikhail Yurochkin. Fusing models with complementary expertise, 2023a.

Wang et al. [2023b] Peihao Wang, Rameswar Panda, Lucas Torroba Hennigen, Philip Greengard, Leonid Karlinsky, Rogerio Feris, David Daniel Cox, Zhangyang Wang, and Yoon Kim. Learning to grow pretrained models for efficient transformer training. arXiv preprint arXiv:2303.00980, 2023b.

Wang et al. [2023c] Peihao Wang, Rameswar Panda, and Zhangyang Wang. Data efficient neural scaling law via model reusing. In International Conference on Machine Learning, pages 36193–36204. PMLR, 2023c.

Wei et al. [2023] Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. Magicoder: Source code is all you need, 2023.

Wortsman et al. [2022] Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S. Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, and Ludwig Schmidt. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time, 2022.

Wu et al. [2024] Chengyue Wu, Yukang Gan, Yixiao Ge, Zeyu Lu, Jiahao Wang, Ye Feng, Ping Luo, and Ying Shan. LLaMA Pro: Progressive LLaMA with block expansion. arXiv preprint arXiv:2401.02415, 2024.

Wu et al. [2023] Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023.

Xu et al. [2024] Zhengqi Xu, Ke Yuan, Huiqiong Wang, Yong Wang, Mingli Song, and Jie Song. Training-free pretrained model merging. arXiv preprint arXiv:2403.01753, 2024.

Yadav et al. [2023] Prateek Yadav, Derek Tam, Leshem Choshen, Colin Raffel, and Mohit Bansal. TIES-Merging: Resolving interference when merging models, 2023.

Yu et al. [2024] Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. Language models are super mario: Absorbing abilities from homologous models as a free lunch, 2024.

Yu et al. [2023] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. MetaMath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023.

Zheng et al. [2023] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena, 2023.

Zheng et al. [2024] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36, 2024.

Zoph et al. [2022] Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam M. Shazeer, and William Fedus. Designing effective sparse expert models. In 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pages 1044–1044, 2022.

Checklist

1. For all authors…

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] Our abstract and introduction in Section 1 accurately reflect the paper's contributions and scope.

(b) Did you describe the limitations of your work? [Yes] In Sections 6 and 8 we discuss two other LLM scaling methods for future work and present some preliminary results. In the Appendix we discuss its limitations.

(c) Did you discuss any potential negative societal impacts of your work? [N/A] Our work focuses on foundational research and is not directly related to societal impacts.

(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes] We have read and follow the ethics review guidelines.

2. If you are including theoretical results…

(a) Did you state the full set of assumptions of all theoretical results? [N/A] Our work does not include theoretical results.

(b) Did you include complete proofs of all theoretical results? [N/A] Our work does not include theoretical results.

3. If you ran experiments (e.g. for benchmarks)…

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] We have provided our code, data (or the way to obtain them), and the instructions needed for reproduction in a GitHub repository.

(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] We have provided all the training details in our Appendix.

(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] Our main results do not contain randomness, and running multiple times with different random seeds leads to the same results.

(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] We have provided the computing resources we used in our Appendix.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets…

(a) If your work uses existing assets, did you cite the creators? [Yes] We have cited them.

(b) Did you mention the license of the assets? [Yes] We have mentioned the licenses of all the datasets we used in the Appendix.

(c) Did you include any new assets either in the supplemental material or as a URL? [N/A] We do not release any new assets.

(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] We only use publicly available data in this work.

(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] The data we are using/curating does not contain any of these.

5. If you used crowdsourcing or conducted research with human subjects…

(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not use any crowdsourcing or conduct research with human subjects.

(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] We did not use any crowdsourcing or conduct research with human subjects.

(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not use any crowdsourcing or conduct research with human subjects.

Appendix

Appendix A: Implementation Details

A.1 Detailed Algorithms of the Heuristic Strategy of Model Merging

Heuristic (Average).

We present the implementation details in Algorithm 1. The algorithm takes a mergable model family as input and generates a merged model as output. For each candidate model in the input family, we compute the proxy-dataset accuracy of the temporary merged model formed by the union of the candidate and the previously selected models; a candidate that does not harm accuracy is added to the final merge set. Each weight of the merged model is generated by averaging the corresponding weights of all selected models.

```
Algorithm 1: Heuristic (Average)

Require: a mergable family {w_1, …, w_n}, sorted in decreasing order of Acc(w_i)
Ensure: merged_model
 1: models_to_merge ← {w_1}
 2: merged_model ← w_1
 3: for i = 2 to n do
 4:     if ProxyAcc(AvgMerge(models_to_merge ∪ {w_i})) ≥ ProxyAcc(merged_model) then
 5:         models_to_merge ← models_to_merge ∪ {w_i}
 6:         merged_model ← AvgMerge(models_to_merge)
 7:     end if
 8: end for
 9: return merged_model
```
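The greedy selection above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it assumes model weights are represented as flat dicts of floats (in practice they would be tensors) and that `proxy_acc` is any callable scoring a merged model on the proxy dataset.

```python
from typing import Callable

def avg_merge(models: list[dict]) -> dict:
    """Average corresponding weights across the selected models."""
    return {name: sum(m[name] for m in models) / len(models) for name in models[0]}

def heuristic_average(models: list[dict], proxy_acc: Callable[[dict], float]) -> dict:
    """Greedy merge: keep a candidate only if averaging it in does not hurt
    proxy accuracy. `models` is assumed pre-sorted by decreasing accuracy."""
    selected = [models[0]]
    merged = models[0]
    for cand in models[1:]:
        trial = avg_merge(selected + [cand])
        if proxy_acc(trial) >= proxy_acc(merged):
            selected.append(cand)
            merged = trial
    return merged
```

With a toy "accuracy" that rewards weights near 2.0, merging `{"w": 1.0}` and `{"w": 3.0}` is accepted while `{"w": -10.0}` is rejected, leaving `{"w": 2.0}`.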

Heuristic (Coefficient).

We present the implementation details in Algorithm 2. Heuristic (Coefficient) builds upon Heuristic (Average) by combining the previously merged model with a new candidate using a different coefficient in each round. To reduce the search space, we restrict the coefficient to the values 0.1, 0.2, …, 0.9.

```
Algorithm 2: Heuristic (Coefficient)

Require: a mergable family {w_1, …, w_n}, sorted in decreasing order of Acc(w_i);
         a list of coefficients {0.1, 0.2, …, 0.9} to be searched when merging
Ensure: merged_model
 1: coefficients ← {0.1, 0.2, …, 0.9}
 2: merged_model ← w_1
 3: for i = 2 to n do
 4:     best_acc, best_c ← ProxyAcc(merged_model), 1.0
 5:     for c in coefficients do
 6:         if ProxyAcc(Merge(c, merged_model, w_i)) > best_acc then
 7:             best_acc, best_c ← ProxyAcc(Merge(c, merged_model, w_i)), c
 8:         end if
 9:     end for
10:     merged_model ← Merge(best_c, merged_model, w_i)
11: end for
12: return merged_model
```
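The coefficient search can be sketched as follows, again assuming flat dicts of floats as stand-ins for weight tensors and interpreting `Merge(c, a, b)` as the linear interpolation `c·a + (1 − c)·b`; the coefficient 1.0 corresponds to keeping the current merge unchanged.

```python
from typing import Callable

def merge(c: float, a: dict, b: dict) -> dict:
    """Linear interpolation, weight by weight: c * a + (1 - c) * b."""
    return {k: c * a[k] + (1.0 - c) * b[k] for k in a}

def heuristic_coefficient(models: list[dict], proxy_acc: Callable[[dict], float]) -> dict:
    """For each new candidate, search coefficients 0.1 .. 0.9 and keep the
    best-scoring combination; best_c stays 1.0 if no mix improves accuracy."""
    coefficients = [round(0.1 * i, 1) for i in range(1, 10)]
    merged = models[0]
    for cand in models[1:]:
        best_acc, best_c = proxy_acc(merged), 1.0
        for c in coefficients:
            acc = proxy_acc(merge(c, merged, cand))
            if acc > best_acc:
                best_acc, best_c = acc, c
        merged = merge(best_c, merged, cand)
    return merged
```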

Heuristic (Similarity).

We present the implementation details in Algorithm 3. We use the average similarity of all weights as the criterion for selecting models in each round. This algorithm selects the candidate model with the highest or lowest similarity and conducts a coefficient search to combine it with the previously merged model.

```
Algorithm 3: Heuristic (Similarity)

Require: a mergable family {w_1, …, w_n}, sorted in decreasing order of Acc(w_i);
         a list of coefficients {0.1, 0.2, …, 0.9} to be searched when merging
Ensure: merged_model
 1: merged_model ← w_1
 2: remaining_models ← {w_2, …, w_n}
 3: for i = 2 to n do
 4:     best_acc, best_c ← ProxyAcc(merged_model), 1.0
 5:     candidate_model ← GetModelBySimilarity(merged_model, remaining_models)
 6:     for c in coefficients do
 7:         if ProxyAcc(Merge(c, merged_model, candidate_model)) > best_acc then
 8:             best_acc, best_c ← ProxyAcc(Merge(c, merged_model, candidate_model)), c
 9:         end if
10:     end for
11:     merged_model ← Merge(best_c, merged_model, candidate_model)
12:     remaining_models ← remaining_models \ {candidate_model}
13: end for
14: return merged_model
```
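A possible realization of the `GetModelBySimilarity` selection step, under the assumption that "similarity" is cosine similarity over the flattened weights (dicts of floats here for illustration); the `highest` flag covers the highest-vs-lowest-similarity variants mentioned above.

```python
import math

def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two flattened weight dictionaries."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def get_model_by_similarity(merged: dict, remaining: list[dict],
                            highest: bool = True) -> dict:
    """Pick the candidate whose weights are most (or least) similar to the
    current merged model."""
    score = lambda m: cosine_similarity(merged, m)
    return max(remaining, key=score) if highest else min(remaining, key=score)
```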
2358
+ A.2Detailed about Evolutionary Strategy of Model Merging
2359
+
2360
+ For the experiments of Q2 - (i) in Section 4.3, we constrain all parameter values to be within the range of
2361
+ [
2362
+ 0
2363
+ ,
2364
+ 1
2365
+ ]
2366
+ . TIES and DARE require to optimize
2367
+ 2
2368
+
2369
+ 𝑘
2370
+ parameters, while other methods require to optimize
2371
+ 𝑘
2372
+ parameters, where
2373
+ 𝑘
2374
+ represents the number of models included in the model zoo.
2375
+
2376
+ For the experiments of Q2 - (ii) in Section 4.3, we choose the Linear method for experimentation, and we constrain all parameter values to be within the range of
2377
+ [
2378
+ 0
2379
+ ,
2380
+ 1
2381
+ ]
2382
+ . For finer-grained merging, we group adjacent
2383
+ 𝑛
2384
+ decoder layers together, where they share the same coefficient. For the remaining parameters, we make them share the same coefficient. Hence, the number of parameters that need to be fine-tuned is given by:
2385
+ 𝑘
2386
+
2387
+ (
2388
+ num_hidden_layers
2389
+ 𝑛
2390
+ +
2391
+ 1
2392
+ )
2393
+ , where
2394
+ 𝑘
2395
+ represents the number of models and
2396
+ 𝑛
2397
+ represents the size of groups. For the case of
2398
+ 𝑛
2399
+ =
2400
+ 32
2401
+ , we utilized the previous results, thus the number of parameters to be optimized is
2402
+ 𝑘
2403
+ .
2404
+
2405
+ For the experiments of Q2 - (iii) in Section 4.3, we control the variation of coefficients obtained through heuristic strategy to not exceed
2406
+ 0.1
2407
+ , and when it is negative, we set it to
2408
+ 0
2409
+ . We also only evaluate the Linear method.
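The parameter count for the grouped Linear search can be checked with a tiny helper; the default `num_hidden_layers = 32` is an assumption matching a Llama-2-7B-style architecture, where n = 32 reduces to the model-wise case.

```python
def num_merge_parameters(k: int, n: int, num_hidden_layers: int = 32) -> int:
    """Coefficients optimized by the evolutionary search when adjacent groups
    of n decoder layers share one coefficient: k * (L / n + 1); the special
    case n == L reuses the model-wise result, giving k parameters."""
    if n == num_hidden_layers:
        return k
    return k * (num_hidden_layers // n + 1)
```

For example, with k = 3 models and groups of n = 4 layers, the search optimizes 3 · (32/4 + 1) = 27 coefficients.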

A.3 Detailed Algorithms of Model Mixture

Model Level Mixture.

We present the implementation details in Algorithm 4. The mixed model consists of a router, which determines the expert to execute inference, and all the input models as experts. All weights of each input model's components, including the embedding layer (embd_layer), decoder layers (layers), and language model head (lm_head), are integrated into the mixed model.

```
Algorithm 4 Model Level Mixture
1: Input: a model family {w_1, ..., w_n}
2: Output: mixed_model
3: mixed_model.router <- GenerateRouter({w_1, ..., w_n})
4: for i = 1 to n do
5:     mixed_model.expert_i <- w_i
6: end for
7: return mixed_model
```
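As an illustration, here is a minimal Python sketch of Algorithm 4. The router is a stand-in callable returning an expert index (GenerateRouter in our pipeline builds it from prompt vectors), and the "models" are placeholder callables:

```python
class ModelLevelMixture:
    """Sketch of Algorithm 4: every input model is kept whole as an
    expert, and a router picks which expert runs inference."""

    def __init__(self, models, router):
        self.experts = list(models)  # expert_i <- w_i
        self.router = router         # maps an input to an expert index

    def forward(self, x):
        return self.experts[self.router(x)](x)

# Toy usage: "models" are plain callables, the router is a keyword match.
experts = [lambda x: f"chat:{x}", lambda x: f"code:{x}"]
mix = ModelLevelMixture(experts, router=lambda x: 1 if "def " in x else 0)
print(mix.forward("def f(): pass"))   # routed to the code expert
```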
Block Level Mixture. We present the implementation details in Algorithm 5. Different from the model-level mixture, the block-level mixture utilizes the embd_layer and lm_head of one designated model within the family (the base_model) to handle input and output, while the transformer blocks of all models within the family act as experts, connected by a per-layer router.

```
Algorithm 5 Block Level Mixture
1: Input: a model family {w_1, ..., w_n} with identical layer amount, one of the family as base_model
2: Output: mixed_model
3: mixed_model.embd_layer <- base_model.embd_layer
4: mixed_model.lm_head <- base_model.lm_head
5: for i = 0 to Len(base_model.layers) do
6:     mixed_model.layer_i.router <- GenerateRouter({w_1, ..., w_n})
7:     for j = 1 to n do
8:         mixed_model.layer_i.expert_j <- w_j.layer_i
9:     end for
10: end for
11: return mixed_model
```
FFN Level Mixture. We present the implementation details in Algorithm 6. The FFN-level mixture is similar to the block-level mixture, differing only in how inner-block components are shared. Each layer of the mixed model takes the attention weights of the base model and builds an MoE structure from the FFNs in the corresponding layers of all input models.

```
Algorithm 6 FFN Level Mixture
1: Input: a model family {w_1, ..., w_n} with identical layer amount, one of the family as base_model
2: Output: mixed_model
3: mixed_model.embd_layer <- base_model.embd_layer
4: mixed_model.lm_head <- base_model.lm_head
5: for i = 0 to Len(base_model.layers) do
6:     mixed_model.layer_i.router <- GenerateRouter({w_1, ..., w_n})
7:     mixed_model.layer_i.attention <- base_model.layer_i.attention
8:     mixed_model.layer_i.norm <- base_model.layer_i.norm
9:     for j = 1 to n do
10:        mixed_model.layer_i.expert_j <- w_j.layer_i.FFN
11:    end for
12: end for
13: return mixed_model
```
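A minimal sketch of one mixed layer from Algorithm 6, with all components replaced by placeholder callables (illustrative only, not the actual module wiring):

```python
class FFNMixtureLayer:
    """One mixed layer: attention and norm are shared from the base
    model; the FFNs of all family members become routed experts."""

    def __init__(self, base_attn, base_norm, ffn_experts, router):
        self.attention = base_attn        # from base_model.layer_i
        self.norm = base_norm             # from base_model.layer_i
        self.experts = list(ffn_experts)  # w_j.layer_i.FFN for each j
        self.router = router

    def forward(self, h):
        h = self.attention(h)
        j = self.router(h)                # pick one FFN expert
        return self.norm(self.experts[j](h))

# Toy numeric usage with stand-in functions.
layer = FFNMixtureLayer(
    base_attn=lambda h: h + 1,
    base_norm=lambda h: h / 2,
    ffn_experts=[lambda h: h * 2, lambda h: h * 3],
    router=lambda h: 0 if h < 10 else 1,
)
print(layer.forward(3))   # attn: 4, expert 0: 8, norm: 4.0
```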
Hybrid Mixture. We present the implementation details in Algorithm 7. The hybrid mixture combines the merging and mixture methods: the first k layers of the mixed model are obtained by merging multiple models, while the remaining layers use the FFN-level mixture architecture.

```
Algorithm 7 Hybrid Mixture
1: Input: a model family {w_1, ..., w_n} with identical layer amount, one of the family as base_model, k layers for merging and the rest for mixture
2: Output: mixed_model
3: mixed_model.embd_layer <- base_model.embd_layer
4: mixed_model.lm_head <- base_model.lm_head
5: for i = 0 to k do
6:     mixed_model.layer_i <- Merge({w_1, ..., w_n}, i)
7: end for
8: for i = k + 1 to Len(base_model.layers) do
9:     mixed_model.layer_i.router <- GenerateRouter({w_1, ..., w_n})
10:    mixed_model.layer_i.attention <- base_model.layer_i.attention
11:    mixed_model.layer_i.norm <- base_model.layer_i.norm
12:    for j = 1 to n do
13:        mixed_model.layer_i.expert_j.FFN <- w_j.layer_i.FFN
14:    end for
15: end for
16: return mixed_model
```
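The layer assembly of Algorithm 7 can be sketched as follows, with layers, Merge, and the MoE wrapper replaced by toy stand-ins:

```python
def build_hybrid(model_layers, k, merge_fn, make_moe_layer):
    """Merge the first k layers across all models; keep an FFN-level
    mixture (here just a tuple of experts) for the remaining layers.

    model_layers: per-model lists of layers, all the same length.
    merge_fn(xs): merges one layer position across models.
    make_moe_layer(xs): wraps one layer position's experts."""
    depth = len(model_layers[0])
    assert all(len(m) == depth for m in model_layers), "identical layer amount"
    merged = [merge_fn([m[i] for m in model_layers]) for i in range(k)]
    mixed = [make_moe_layer([m[i] for m in model_layers]) for i in range(k, depth)]
    return merged + mixed

# Toy usage: "layers" are numbers; merge = average, MoE = keep all experts.
zoo = [[1, 2, 3, 4], [3, 4, 5, 6]]
model = build_hybrid(zoo, k=2, merge_fn=lambda xs: sum(xs) / len(xs),
                     make_moe_layer=lambda xs: tuple(xs))
print(model)   # [2.0, 3.0, (3, 5), (4, 6)]
```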
A.4 Details of Model-GLUE

The models selected by the heuristic strategy are: migtissera/Synthia-7B-v1.2, neuralmagic/Llama-2-7b-evolcodealpaca, teknium/OpenHermes-7B, meta-llama/Llama-2-7b-chat-hf, meta-math/MetaMath-7B-V1.0, and lmsys/vicuna-7b-v1.5. Since merging ise-uiuc/Magicoder-S-CL-7B and codellama/CodeLlama-7b-Instruct-hf does not improve the CodeLlama mergeable family, we select ise-uiuc/Magicoder-S-CL-7B as its representative model.

The final models used for the model-level mixture are: LLM360/CrystalChat, ise-uiuc/Magicoder-S-CL-7B, meta-math/MetaMath-Llemma-7B, and the representative model of the Llama-2 family obtained through the Heuristic (Coefficient) strategy. Please refer to our repository for specific configurations.

A.5 Details of clustering in the selective merging pipeline

Motivation for using cosine similarity as a model selection criterion. A previous merging study (Yu et al. [2024]) finds that merging performance is consistent with parameter similarity. We inherit this observation and use cosine similarity as a representative measure of whether models can be merged; in our preliminary results it works effectively. Empirically, when the cosine similarity between models exceeds 0.95, merging them yields positive benefits. In Table 14, we present examples of successful and unsuccessful merging. For example, the cosine similarity between the weights of Llama-2-chat and Vicuna is 0.9982, and the merged model significantly outperforms its parent models; on the other hand, the similarity between Llama-2-chat and CodeLlama is 0.5351, and the merged model is inferior to CodeLlama. Moreover, cosine similarity is simple and efficient to compute. For these reasons, we use cosine similarity throughout the selective merging pipeline.

Criteria for Determining the Number of Clusters. We cluster models with pairwise cosine similarity greater than 0.95 into a mergeable family, ensuring that within each family all pairwise similarities between models exceed 0.95. The number of clusters is determined automatically during this process, after which we execute our merging strategy within each cluster. For the Which16 model zoo in our paper, clustering the 16 models yields five mergeable families: ❶ 12 models fine-tuned from Llama-2, ❷ ise-uiuc/Magicoder-S-CL-7B, ❸ codellama/CodeLlama-7b-Instruct-hf, ❹ meta-math/MetaMath-Llemma-7B, and ❺ LLM360/CrystalChat. Since each of the remaining clusters contains only one model, we report results of the different merging strategies only within Family ❶.

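This clustering step can be sketched as follows. The exact procedure is not spelled out here, so this is a minimal greedy variant that preserves the stated invariant (all pairwise similarities within a family exceed the threshold); `cosine` operates on flattened weight vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two flattened weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster_mergeable(weights, threshold=0.95):
    """A model joins a family only if its similarity to EVERY member
    already in the family exceeds the threshold, so all pairwise
    similarities within a family stay above it."""
    families = []
    for i, w in enumerate(weights):
        for fam in families:
            if all(cosine(w, weights[j]) > threshold for j in fam):
                fam.append(i)
                break
        else:
            families.append([i])
    return families

# Toy vectors: models 0 and 1 are near-identical, model 2 is unrelated.
zoo = [[1.0, 0.0, 0.1], [0.99, 0.02, 0.1], [0.0, 1.0, 0.0]]
print(cluster_mergeable(zoo))   # [[0, 1], [2]]
```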
Impact of clustering threshold. We computed the cosine similarity between 12 LLMs, all fine-tuned from Llama-2. These models are considered well mergeable, sharing the same architecture and initialization, and their pairwise similarities range from 0.9680 to 0.9999, so 0.95 is a reasonable lower bound for model clustering. To show the impact of different clustering thresholds, we examined the performance of merged models with drastically different similarities: Llama-2, deepseek-coder, CodeLlama, and MetaMath-Llemma. We use linear interpolation to merge two models and present the benchmark results in Table 11; the performance of the individual parent models is shown in Table 12. If the merged model outperforms its parent models on average accuracy, we consider the merge successful. From Table 11, successful merging occurs only between CodeLlama and CodeLlama-Instruct, whose weights reach 0.99 similarity and share the same initialization. To include more mergeable models, we choose 0.95 as the clustering threshold.

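The linear interpolation used for the two-model merges in Table 11 can be sketched with plain dicts standing in for state_dicts (alpha = 0.5 shown here; the actual experiments may tune it):

```python
def linear_merge(w1, w2, alpha=0.5):
    """Linear interpolation of two models' parameters:
    w_merged = alpha * w1 + (1 - alpha) * w2, applied per tensor."""
    assert w1.keys() == w2.keys(), "models must share an architecture"
    return {
        name: [alpha * a + (1 - alpha) * b for a, b in zip(w1[name], w2[name])]
        for name in w1
    }

m1 = {"layer.weight": [1.0, 2.0]}
m2 = {"layer.weight": [3.0, 4.0]}
print(linear_merge(m1, m2))   # {'layer.weight': [2.0, 3.0]}
```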
Table 11: Performance of merged models with different similarity. Sim. stands for cosine similarity.

| Parent Model 1 | Parent Model 2 | ARC | MMLU | WinoGrande | GSM8K | HumanEval | MBPP | Avg. | Sim. |
|---|---|---|---|---|---|---|---|---|---|
| Llama-2-7b-hf | deepseek-coder-6.7b-base | 27.73% | 24.38% | 49.64% | 0.00% | 0.00% | 0.00% | 16.96% | 0% |
| Llama-2-7b-hf | CodeLlama-7b-hf | 41.04% | 31.68% | 66.85% | 5.76% | 10.98% | 21.40% | 29.62% | 52.55% |
| CodeLlama-7b-Python-hf | CodeLlama-7b-hf | 40.61% | 37.17% | 65.35% | 6.67% | 21.95% | 25.60% | 32.89% | 60.34% |
| MetaMath-Llemma-7B | CodeLlama-7b-hf | 46.16% | 42.86% | 64.64% | 27.07% | 34.76% | 37.40% | 42.15% | 88.70% |
| CodeLlama-7b-Instruct-hf | CodeLlama-7b-hf | 43.86% | 41.39% | 68.59% | 16.07% | 33.54% | 40.80% | 40.71% | 99.94% |

Table 12: Performance of parent models.

| Model | ARC | MMLU | WinoGrande | GSM8K | HumanEval | MBPP | Avg. |
|---|---|---|---|---|---|---|---|
| Llama-2-7b-hf | 53.92% | 45.83% | 74.11% | 13.72% | 10.98% | 18.00% | 36.09% |
| deepseek-coder-6.7b-base | 36.86% | 36.36% | 57.30% | 19.03% | 45.12% | 54.80% | 41.58% |
| CodeLlama-7b-hf | 41.89% | 39.05% | 65.98% | 11.83% | 32.32% | 37.20% | 38.05% |
| CodeLlama-7b-Python-hf | 40.70% | 35.62% | 64.56% | 13.12% | 38.41% | 41.20% | 38.94% |
| MetaMath-Llemma-7B | 46.67% | 46.29% | 64.33% | 62.24% | 32.32% | 42.00% | 48.97% |
| CodeLlama-7b-Instruct-hf | 43.00% | 41.69% | 65.90% | 18.12% | 33.70% | 40.00% | 40.40% |
A.6 Energy Consumption

Existing literature is mainly concerned with carbon emissions during LLM pre-training (Dodge et al. [2022], Touvron et al. [2023]). However, the training costs of the approaches evaluated in our benchmark are minimal. Specifically, the only training expenditure in our study is the B-M-S router training described in Section 4.4. This process requires about 80 GPU hours, resulting in 13.55 kg of CO2 emissions assuming a 400 W power draw. In contrast, LLaMA-2-7B pre-training emits 31.22 t of CO2, over 2000 times more than ours.

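The arithmetic behind these figures can be checked directly; the grid carbon intensity below is implied by the stated numbers rather than reported:

```python
gpu_hours = 80                      # B-M-S router training
power_kw = 0.4                      # 400 W per GPU
energy_kwh = gpu_hours * power_kw   # 32 kWh total

router_co2_kg = 13.55
pretrain_co2_kg = 31.22 * 1000      # LLaMA-2-7B pre-training, 31.22 t

# Implied carbon intensity (derived, not stated in the paper) and the
# pre-training / router-training emission ratio.
intensity = router_co2_kg / energy_kwh
ratio = pretrain_co2_kg / router_co2_kg
print(round(intensity, 2), round(ratio))   # 0.42 kg CO2/kWh, ~2304x
```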
Appendix B Additional Results

B.1 Experiment on Mistral model family

We choose the Llama-2-based model family for the main experiments because it offers more diverse variants built on different datasets and training recipes. Many domain-specific models are based on Llama-2, such as those for code, mathematics, healthcare, finance, law, and mental health. Importantly, a series of models have undergone continuous pre-training from Llama-2, and a considerable portion of models trained from scratch draw inspiration from its architecture; while these models share the same architecture as Llama-2, their weights exhibit significant differences. Thus, we can thoroughly examine the effect of merging, mixture, and Model-GLUE under different settings. To further evaluate our proposal on the Mistral model family, we built a Mistral-based Which8 model zoo and replicated the experiments outlined in Section 5.2. The results in Table 13 show that Model-GLUE consistently outperforms the baselines.

Table 13: Comparison between the best single model, Merging, Full Mixture and our Model-GLUE with the Mistral model zoo. We highlight the better performance in bold.

| Model | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| Best Single Model | **67.24%** | 79.01% | 61.77% | 63.15% | 35.98% | 39.00% | 57.69% |
| DARE | 64.33% | 78.37% | 63.27% | 63.31% | 39.02% | **44.60%** | 58.82% |
| TIES | 63.74% | 77.90% | 60.90% | 49.13% | 34.76% | 39.40% | 54.30% |
| F-L-S | 64.85% | **79.72%** | 63.42% | 64.82% | 42.00% | 42.07% | 59.48% |
| Model-GLUE | 65.02% | 78.85% | **64.39%** | **65.50%** | **44.60%** | 42.68% | **60.18%** |
B.2 Model Merging

We present the detailed results of Figure 4 in Table 15 and Table 16, and the other results of Section 4.3 in Table 14, Table 17, and Table 18.

Table 14: Failure case of existing merging approaches when expanding the model zoo.

| Merging Method | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| *Single Model* | | | | | | | |
| Llama-2-chat | 54.10% | 71.27% | 47.28% | 23.05% | 17.00% | 13.41% | 37.68% |
| Vicuna | 53.75% | 70.56% | 49.78% | 19.11% | 6.00% | 19.51% | 36.45% |
| CodeLlama | 43.52% | 65.11% | 41.83% | 17.06% | 40.00% | 33.70% | 40.20% |
| *Merge Llama-2-chat and Vicuna* | | | | | | | |
| Linear | 54.27% | 72.30% | 50.72% | 24.49% | 20.80% | 20.12% | 40.45% |
| Model Stock | 54.61% | 74.43% | 47.44% | 16.07% | 22.40% | 14.02% | 38.16% |
| SLERP | 55.29% | 72.45% | 50.51% | 24.87% | 21.80% | 20.12% | 40.84% |
| Task Arithmetic | 54.27% | 71.67% | 49.95% | 26.31% | 21.40% | 17.07% | 40.11% |
| DARE | 54.35% | 72.14% | 50.38% | 26.61% | 21.00% | 17.68% | 40.36% |
| TIES | 52.65% | 69.93% | 49.84% | 24.34% | 17.60% | 19.51% | 38.98% |
| *Merge Llama-2-chat and CodeLlama* | | | | | | | |
| Linear | 45.05% | 67.09% | 39.03% | 16.76% | 36.60% | 23.17% | 37.95% |
| Model Stock | 50.34% | 71.27% | 41.06% | 10.01% | 15.40% | 7.93% | 32.67% |
| SLERP | 52.05% | 71.43% | 46.41% | 18.95% | 20.80% | 18.90% | 38.09% |
| Task Arithmetic | 44.97% | 68.03% | 38.83% | 7.05% | 10.60% | 12.20% | 30.28% |
| DARE | 38.91% | 65.98% | 31.90% | 3.34% | 15.00% | 9.76% | 27.48% |
| TIES | 21.67% | 49.88% | 25.25% | 0.00% | 0.00% | 0.00% | 16.13% |
Table 15: Comparison between different Heuristic Strategies.

| Heuristic Strategy | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| Best Single Model | 55.03% | 73.72% | 48.18% | 24.03% | 17.80% | 13.41% | 38.70% |
| *Which12* | | | | | | | |
| Average | 54.86% | 73.48% | 49.42% | 32.98% | 23.60% | 21.34% | 42.61% |
| Coefficient | 55.12% | 73.64% | 50.13% | 39.35% | 21.80% | 21.34% | 43.56% |
| Similarity | 56.48% | 73.32% | 52.56% | 37.91% | 17.80% | 20.12% | 43.03% |
| Similarity | 55.80% | 71.74% | 52.39% | 47.99% | 16.40% | 15.85% | 43.36% |
| *Which8* | | | | | | | |
| Average | 55.38% | 74.11% | 48.65% | 34.42% | 25.20% | 23.17% | 43.49% |
| Coefficient | 55.12% | 73.64% | 50.13% | 39.35% | 21.80% | 21.34% | 43.56% |
| Similarity | 54.95% | 73.64% | 49.00% | 43.75% | 19.80% | 11.59% | 42.12% |
| Similarity | 54.78% | 72.30% | 49.06% | 47.23% | 21.20% | 15.85% | 43.40% |
| *Which4* | | | | | | | |
| Average | 54.86% | 73.16% | 47.91% | 37.00% | 24.00% | 21.34% | 43.05% |
| Coefficient | 55.12% | 74.03% | 48.18% | 41.93% | 19.40% | 14.63% | 42.22% |
| Similarity | 54.52% | 73.24% | 47.81% | 41.77% | 21.20% | 20.73% | 43.21% |
| Similarity | 53.92% | 73.56% | 47.81% | 48.45% | 18.20% | 10.98% | 42.15% |
Table 16: Comparison between different merging methods.

| Merging Method | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| *Which12* | | | | | | | |
| Linear | 56.48% | 73.56% | 51.79% | 36.01% | 23.60% | 20.12% | 43.59% |
| Task Arithmetic | 51.54% | 69.14% | 51.07% | 54.66% | 1.80% | 13.41% | 40.27% |
| DARE | 51.19% | 70.09% | 51.03% | 53.53% | 6.80% | 12.80% | 40.91% |
| TIES | 53.75% | 70.64% | 52.77% | 49.36% | 17.20% | 2.44% | 41.03% |
| *Which8* | | | | | | | |
| Linear | 55.12% | 73.64% | 49.59% | 40.64% | 22.40% | 18.90% | 43.38% |
| Task Arithmetic | 52.65% | 70.64% | 48.11% | 51.18% | 19.80% | 21.95% | 44.05% |
| DARE | 52.56% | 71.19% | 49.00% | 53.37% | 14.80% | 20.12% | 43.51% |
| TIES | 50.26% | 71.27% | 48.58% | 47.69% | 18.40% | 0.61% | 39.47% |
| *Which4* | | | | | | | |
| Linear | 53.50% | 73.01% | 47.32% | 45.79% | 20.20% | 15.85% | 42.61% |
| Task Arithmetic | 52.73% | 72.30% | 46.81% | 51.86% | 18.20% | 18.29% | 43.36% |
| DARE | 51.45% | 71.67% | 45.61% | 51.55% | 16.60% | 20.12% | 42.83% |
| TIES | 50.51% | 71.98% | 46.62% | 49.43% | 16.40% | 1.22% | 39.36% |
Table 17: The impact of different group sizes on Evolutionary Strategy.

| Group Size | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| *Which12* | | | | | | | |
| 1 | 56.14% | 73.32% | 51.50% | 32.45% | 24.00% | 20.12% | 42.92% |
| 4 | 56.31% | 73.72% | 52.04% | 33.43% | 24.00% | 18.90% | 43.07% |
| 8 | 56.83% | 74.43% | 53.01% | 38.13% | 21.20% | 19.51% | 43.85% |
| 32 | 56.48% | 73.56% | 51.79% | 36.01% | 23.60% | 20.12% | 43.59% |
| *Which8* | | | | | | | |
| 1 | 55.12% | 74.11% | 49.96% | 31.69% | 25.20% | 19.51% | 42.60% |
| 4 | 56.06% | 74.66% | 50.04% | 33.59% | 24.20% | 21.95% | 43.42% |
| 8 | 55.29% | 73.88% | 49.20% | 40.56% | 24.60% | 18.90% | 43.74% |
| 32 | 55.12% | 73.64% | 49.59% | 40.64% | 22.40% | 18.90% | 43.38% |
| *Which4* | | | | | | | |
| 1 | 54.61% | 73.32% | 47.63% | 41.62% | 23.60% | 15.85% | 42.77% |
| 4 | 52.90% | 73.32% | 46.99% | 43.06% | 24.00% | 20.73% | 43.50% |
| 8 | 54.01% | 73.64% | 47.39% | 43.75% | 22.40% | 21.95% | 43.86% |
| 32 | 53.50% | 73.01% | 47.32% | 45.79% | 20.20% | 15.85% | 42.61% |
Table 18: More efficient merging strategy.

| Strategy | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average | Round |
|---|---|---|---|---|---|---|---|---|
| *Which12* | | | | | | | | |
| Evo (Vanilla) | 56.48% | 73.56% | 51.79% | 36.01% | 23.60% | 20.12% | 43.59% | 200 |
| Evo (Heuristic) | 55.29% | 72.85% | 49.96% | 40.56% | 22.80% | 18.29% | 43.29% | 127 |
| *Which8* | | | | | | | | |
| Evo (Vanilla) | 55.12% | 73.64% | 49.59% | 40.64% | 22.40% | 18.90% | 43.38% | 200 |
| Evo (Heuristic) | 54.69% | 72.93% | 49.68% | 45.19% | 21.00% | 19.51% | 43.83% | 71 |
| *Which4* | | | | | | | | |
| Evo (Vanilla) | 53.50% | 73.01% | 47.32% | 45.79% | 20.20% | 15.85% | 42.61% | 200 |
| Evo (Heuristic) | 54.52% | 73.56% | 47.74% | 40.71% | 23.20% | 21.95% | 43.61% | 69 |
B.3 Model Mixture

For the model-level mixture, we use more fine-grained prompts to construct the router, and report the results in Table 19.

Table 19: Better prompt vector for the mixture of Llama-2-7b-chat and CrystalChat. We highlight the better performance in bold.

| Model | ARC | WinoGrande | MMLU | GSM8K | MBPP | HumanEval | Average |
|---|---|---|---|---|---|---|---|
| Best Single Model | **52.05%** | 69.46% | 50.77% | 27.22% | **39.60%** | **35.98%** | 45.85% |
| M-L-S | 51.88% | **70.88%** | **52.44%** | **32.52%** | 39.40% | 31.10% | **46.37%** |