Paper: Resolving Interference When Merging Models (arXiv:2306.01708)
This merge provides a performance baseline for merging the instruct model back onto the base. It follows Rombodawg's merge method for Qwen models and should show whether the same approach works with Llama models. The running hypothesis is that the IFEval score will get nuked; success would be little to no performance change relative to the vanilla instruct model.
This model was merged using the TIES merge method, with unsloth/Meta-Llama-3.1-8B as the base.
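For intuition, here is a minimal Python sketch of the TIES procedure (trim, elect sign, disjoint merge) from the paper above; `ties_merge` and its tensor-level treatment are illustrative assumptions, not mergekit's actual implementation. Note that with `density: 1.0`, `weight: 1.0`, and a single task vector, as in the config below, it reduces to adding the full instruct-minus-base delta back onto the base, i.e. reproducing the instruct weights.

```python
import torch

def ties_merge(base: torch.Tensor, tuned: list[torch.Tensor],
               density: float = 1.0, weight: float = 1.0) -> torch.Tensor:
    """Sketch of TIES: trim each task vector to its largest-magnitude
    entries, elect a per-parameter sign, average the sign-agreeing
    values, and add the result back onto the base weights."""
    task_vectors = []
    for t in tuned:
        tv = (t - base) * weight                 # task vector (delta from base)
        if density < 1.0:
            # Trim: zero out all but the top-`density` fraction by magnitude.
            k = max(1, int(tv.numel() * density))
            threshold = tv.abs().flatten().kthvalue(tv.numel() - k + 1).values
            tv = torch.where(tv.abs() >= threshold, tv, torch.zeros_like(tv))
        task_vectors.append(tv)
    stacked = torch.stack(task_vectors)          # (n_models, *param_shape)
    elected = torch.sign(stacked.sum(dim=0))     # elect the dominant sign
    agree = torch.sign(stacked) == elected       # mask of sign-agreeing entries
    counts = agree.sum(dim=0).clamp(min=1)
    merged_tv = (stacked * agree).sum(dim=0) / counts  # disjoint mean
    return base + merged_tv
```

With two or more fine-tunes, the sign election is what resolves interference between task vectors; here, with one model at full density, it only formalizes that this config is an identity-like merge.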
The following models were included in the merge:

* unsloth/Meta-Llama-3.1-8B-Instruct
The following YAML configuration was used to produce this model:
```yaml
base_model: unsloth/Meta-Llama-3.1-8B
dtype: bfloat16
merge_method: ties
parameters:
  density: 1.0
  weight: 1.0
slices:
- sources:
  - layer_range: [0, 32]
    model: unsloth/Meta-Llama-3.1-8B-Instruct
    parameters:
      density: 1.0
      weight: 1.0
  - layer_range: [0, 32]
    model: unsloth/Meta-Llama-3.1-8B
tokenizer_source: unsloth/Meta-Llama-3.1-8B-Instruct
```
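A quick way to sanity-check the result, sketched under assumptions: the output path `./merged-model` and the prompt are placeholders, and because `tokenizer_source` pulls the instruct tokenizer, the chat template should be available.

```python
# Produce the merge first (assumes mergekit is installed):
#   mergekit-yaml config.yml ./merged-model
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("./merged-model", torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained("./merged-model")

# The instruct chat template comes along via tokenizer_source.
messages = [{"role": "user", "content": "Give me one sentence about model merging."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the hypothesis holds, responses should be indistinguishable from the vanilla instruct model's.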
Open LLM Leaderboard evaluation results:
| Metric | Value (%) |
|---|---|
| Avg. | 24.81 |
| IFEval (0-shot) | 54.24 |
| BBH (3-shot) | 29.77 |
| MATH Lvl 5 (4-shot) | 20.02 |
| GPQA (0-shot) | 5.93 |
| MuSR (0-shot) | 8.04 |
| MMLU-PRO (5-shot) | 30.89 |