Based on the DARE method from "Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch" (arXiv:2311.03099).
This is a merge of pre-trained language models created using mergekit.
This model was merged using the DARE TIES merge method, with TareksLab/M-BASE-SCE as the base.
The following models were included in the merge:
- TareksLab/M-MERGE4
- TareksLab/M-MERGE3
- TareksLab/M-MERGE2
- TareksLab/M-MERGE1
The following YAML configuration was used to produce this model:
models:
  - model: TareksLab/M-MERGE4
    parameters:
      weight: 0.15
      density: 0.53
  - model: TareksLab/M-MERGE3
    parameters:
      weight: 0.20
      density: 0.53
  - model: TareksLab/M-MERGE2
    parameters:
      weight: 0.25
      density: 0.53
  - model: TareksLab/M-MERGE1
    parameters:
      weight: 0.30
      density: 0.53
  - model: TareksLab/M-BASE-SCE
    parameters:
      weight: 0.10
      density: 0.53
merge_method: dare_ties
base_model: TareksLab/M-BASE-SCE
parameters:
  normalize: false
out_dtype: bfloat16
tokenizer:
  source: TareksLab/M-TOKENIZER-SCE