Paper: Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch (arXiv:2311.03099)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the DARE TIES merge method, with jambroz/sixtyoneeighty-7b as the base.
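For intuition, here is a minimal sketch of DARE's drop-and-rescale step: each fine-tuned model's delta from the base is randomly sparsified (keeping roughly a `density` fraction of entries) and the survivors are rescaled by 1/density so the delta is unchanged in expectation, before TIES-style sign election and weighted summation. The function name and standalone form are ours for illustration; mergekit's actual implementation differs.

```python
import torch

def dare_drop_and_rescale(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Illustrative DARE step: keep ~`density` of the task-vector entries
    (density: 0.53 in the config below) and rescale the survivors by
    1/density so the expected delta is preserved."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# For one weight tensor: delta = finetuned_weight - base_weight;
# its contribution to the merge is then weight * dare_drop_and_rescale(delta, 0.53).
```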
The following models were included in the merge:

- mlabonne/UltraMerge-7B
- HuggingFaceH4/mistral-7b-anthropic
- jambroz/FNCARL-7b
The following YAML configuration was used to produce this model:
```yaml
base_model: jambroz/sixtyoneeighty-7b
dtype: bfloat16
merge_method: dare_ties
models:
  - model: jambroz/sixtyoneeighty-7b
  - model: mlabonne/UltraMerge-7B
    parameters:
      density: '0.53'
      weight: '0.4'
  - model: HuggingFaceH4/mistral-7b-anthropic
    parameters:
      density: '0.53'
      weight: '0.3'
  - model: jambroz/FNCARL-7b
    parameters:
      density: '0.53'
      weight: '0.3'
parameters:
  int8_mask: true
```
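As a reproduction sketch (not part of the original card): assuming the configuration above is saved as `config.yaml` and that mergekit's Python entry points (`MergeConfiguration`, `run_merge`) match its current README, the merge could be re-run roughly as follows; the `mergekit-yaml config.yaml ./merged-model` CLI is the equivalent one-liner.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above.
with open("config.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the DARE-TIES merge and write the merged model to ./merged-model.
# (MergeOptions fields may vary across mergekit versions; check its README.)
run_merge(
    merge_config,
    "./merged-model",
    options=MergeOptions(cuda=torch.cuda.is_available()),
)
```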