Paper: Resolving Interference When Merging Models (arXiv:2306.01708)
This is a merged pre-trained language model created with mergekit using the TIES merge method. It uses microsoft/Phi-3.5-mini-instruct as the base model and incorporates the capabilities of the nbeerbower/phi3.5-gutenberg-4B and ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1 fine-tunes.
The following models were included in the merge:

- nbeerbower/phi3.5-gutenberg-4B
- ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
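At a high level, TIES merging operates per parameter tensor: compute each fine-tune's task vector (its difference from the base), trim low-magnitude entries down to the configured density, elect a majority sign for every parameter, and average only the entries that agree with that sign back onto the base. The following is a minimal numpy sketch under those assumptions; the function and variable names are illustrative, and mergekit's actual implementation additionally handles sharding, dtypes, and the int8 masking enabled in the configuration below.

```python
import numpy as np

def ties_merge(base, finetuned, weights, density=1.0):
    """Merge one parameter tensor with a simplified TIES procedure.

    base      -- the base model's weights for this tensor
    finetuned -- list of fine-tuned models' weights for the same tensor
    weights   -- per-model merge weights (the `weight: 1` entries)
    density   -- fraction of task-vector entries kept after trimming
    """
    # 1. Task vectors: how each fine-tune moved away from the base.
    taus = [ft - base for ft in finetuned]

    # 2. Trim: keep only the top-`density` fraction by magnitude.
    #    With density: 1 (as in this merge) nothing is trimmed.
    trimmed = []
    for tau in taus:
        k = int(round(density * tau.size))
        if k < tau.size:
            thresh = np.sort(np.abs(tau), axis=None)[-k] if k > 0 else np.inf
            tau = np.where(np.abs(tau) >= thresh, tau, 0.0)
        trimmed.append(tau)

    # 3. Elect a sign per parameter: the sign of the weighted total
    #    movement, i.e. whichever direction carries more mass.
    w = np.array(weights, dtype=np.float64).reshape(-1, *([1] * base.ndim))
    stacked = np.stack(trimmed) * w
    elected = np.sign(stacked.sum(axis=0))

    # 4. Disjoint merge: average only entries that agree with the elected
    #    sign, dividing by the agreeing models' total weight (roughly the
    #    effect of `normalize: true`).
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    numer = np.where(agree, stacked, 0.0).sum(axis=0)
    denom = np.where(agree, w, 0.0).sum(axis=0)
    merged_tau = np.where(denom > 0, numer / np.where(denom > 0, denom, 1.0), 0.0)

    # 5. Apply the merged task vector back onto the base weights.
    return base + merged_tau
```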
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
    parameters:
      weight: 1
  - model: nbeerbower/phi3.5-gutenberg-4B
    parameters:
      weight: 1
merge_method: ties
base_model: microsoft/Phi-3.5-mini-instruct
parameters:
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
```
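To reproduce the merge, this configuration can be saved to a file and passed to mergekit. Below is a minimal sketch using mergekit's Python API, assuming a recent mergekit release; the file name config.yaml and the output directory ./merged-phi3.5 are placeholders:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the merge; the output directory name is arbitrary.
run_merge(
    merge_config,
    "./merged-phi3.5",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```

The mergekit-yaml command-line entry point accepts the same configuration file and is equivalent.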
Open LLM Leaderboard evaluation results (detailed per-task results are available on the leaderboard):
| Metric | Value (%) |
|---|---|
| Avg. | 25.29 |
| IFEval (0-shot) | 52.28 |
| BBH (3-shot) | 35.45 |
| MATH Lvl 5 (4-shot) | 6.19 |
| GPQA (0-shot) | 10.85 |
| MuSR (0-shot) | 15.80 |
| MMLU-PRO (5-shot) | 31.18 |