Paper: [Resolving Interference When Merging Models](https://arxiv.org/abs/2306.01708) (TIES-Merging, arXiv:2306.01708)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the TIES merge method (sketched below), with CultriX/NeuralTrix-7B-dpo as the base.
The following models were included in the merge:
- mlabonne/AlphaMonarch-7B
- bardsai/jaskier-7b-dpo-v5.6
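For intuition, here is a minimal sketch of the TIES procedure from the paper above, applied to a single weight tensor. This is illustrative only, not mergekit's actual implementation; the function names and the exact trimming and normalization details are assumptions.

```python
# Toy sketch of TIES merging (arXiv:2306.01708) for one weight tensor.
# Not mergekit's implementation; names and details are illustrative.
import torch

def trim(task_vector: torch.Tensor, density: float) -> torch.Tensor:
    """Keep only the top-`density` fraction of entries by magnitude."""
    n = task_vector.numel()
    k = int(density * n)
    if k == 0:
        return torch.zeros_like(task_vector)
    # Threshold = smallest absolute value among the top-k entries.
    threshold = task_vector.abs().flatten().kthvalue(n - k + 1).values
    return torch.where(task_vector.abs() >= threshold,
                       task_vector, torch.zeros_like(task_vector))

def ties_merge(base: torch.Tensor, finetuned: list[torch.Tensor],
               weights: list[float], density: float) -> torch.Tensor:
    # 1. Task vectors: weighted, trimmed differences from the base model.
    deltas = [w * trim(ft - base, density)
              for ft, w in zip(finetuned, weights)]
    stacked = torch.stack(deltas)
    # 2. Elect a sign per parameter from the summed trimmed deltas.
    elected_sign = torch.sign(stacked.sum(dim=0))
    # 3. Keep only entries that agree with the elected sign, then average
    #    over the agreeing contributors (cf. `normalize: true` below).
    agree = torch.sign(stacked) == elected_sign
    kept = torch.where(agree, stacked, torch.zeros_like(stacked))
    counts = agree.sum(dim=0).clamp(min=1)
    merged_delta = kept.sum(dim=0) / counts
    # 4. Add the merged task vector back onto the base weights.
    return base + merged_delta
```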
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: CultriX/NeuralTrix-7B-dpo # no parameters necessary for base model
  - model: mlabonne/AlphaMonarch-7B
    parameters:
      density: 0.5 # fraction of weights in differences from the base model to retain
      weight: # weight gradient
        - filter: mlp
          value: 0.5
        - value: 0
  - model: bardsai/jaskier-7b-dpo-v5.6
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: CultriX/NeuralTrix-7B-dpo
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
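In this config, `density: 0.5` retains the top half of each model's task-vector entries by magnitude, and the `weight` gradient gives mlabonne/AlphaMonarch-7B a weight of 0.5 on mlp tensors and 0 elsewhere, so (roughly speaking) only its MLP deltas contribute. A config like this can be passed to mergekit's `mergekit-yaml` entry point to reproduce the merge. The sketch below shows one way to load and run the resulting model with transformers; the `./merged-model` path is a placeholder for the merge output directory, not part of this card.

```python
# Minimal sketch: load the merged model with transformers.
# "./merged-model" is a hypothetical path to the mergekit output directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./merged-model")
model = AutoModelForCausalLM.from_pretrained(
    "./merged-model", torch_dtype=torch.float16  # matches `dtype: float16` above
)

prompt = "Explain the TIES merge method in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```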
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 76.32 |
| AI2 Reasoning Challenge (25-Shot) | 72.78 |
| HellaSwag (10-Shot) | 89.17 |
| MMLU (5-Shot) | 64.44 |
| TruthfulQA (0-shot) | 78.13 |
| Winogrande (5-shot) | 84.93 |
| GSM8k (5-shot) | 68.46 |