Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch (arXiv:2311.03099)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the DARE TIES merge method, with Qwen/Qwen3-32B as the base model.
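For intuition: DARE randomly drops entries of each fine-tuned model's delta from the base (its "task vector") and rescales the survivors so the expected delta is unchanged; TIES then elects a per-parameter sign and discards entries that disagree before summing the deltas onto the base. In the configuration below, `density` is the fraction of delta parameters retained. The following is a minimal single-tensor sketch of the idea in PyTorch, not mergekit's implementation; the function names are illustrative.

```python
import torch

def dare_sparsify(delta: torch.Tensor, density: float) -> torch.Tensor:
    # DARE step: keep each entry with probability `density` and rescale
    # survivors by 1/density so the expected delta is unchanged.
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def dare_ties(base: torch.Tensor,
              tuned: list[torch.Tensor],
              densities: list[float],
              weights: list[float]) -> torch.Tensor:
    # Task vectors: each fine-tuned tensor minus the shared base,
    # sparsified by DARE and scaled by its merge weight.
    deltas = torch.stack([
        w * dare_sparsify(t - base, d)
        for t, d, w in zip(tuned, densities, weights)
    ])
    # TIES sign election: keep only entries whose sign agrees with the
    # dominant (weighted-sum) sign, then sum the agreeing entries.
    elected = torch.sign(deltas.sum(dim=0))
    agree = torch.sign(deltas) == elected
    # `normalize: false` in the config below means the agreeing deltas
    # are summed as-is rather than renormalized by total weight.
    return base + (deltas * agree).sum(dim=0)
```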
The following models were included in the merge:

* LLMcompe-Team-Watanabe/Qwen3-32B-openmathreasoning-sft
* LLMcompe-Team-Watanabe/Qwen3-32B-textbookreasoning-UGPhysics-AoPsInstruct-sft
* LLMcompe-Team-Watanabe/Qwen3-32B-sft-HIS-Chem-Engineering-45k-1ep-lr5e6-4096
* LLMcompe-Team-Watanabe/Qwen3-32B-textbookreasoning-UGPhysics-AoPsInstruct-openmathreasoning-sft
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Qwen/Qwen3-32B
  - model: LLMcompe-Team-Watanabe/Qwen3-32B-openmathreasoning-sft
    parameters:
      density: 0.70  # high density to strengthen math
      weight: 0.30
  - model: LLMcompe-Team-Watanabe/Qwen3-32B-textbookreasoning-UGPhysics-AoPsInstruct-sft
    parameters:
      density: 0.60  # medium density to add physics and biology
      weight: 0.25
  - model: LLMcompe-Team-Watanabe/Qwen3-32B-sft-HIS-Chem-Engineering-45k-1ep-lr5e6-4096
    parameters:
      density: 0.45  # low density to complement chemistry and engineering
      weight: 0.20
  - model: LLMcompe-Team-Watanabe/Qwen3-32B-textbookreasoning-UGPhysics-AoPsInstruct-openmathreasoning-sft
    parameters:
      density: 0.40
      weight: 0.10
  - model: Qwen/Qwen3-32B
    parameters:
      density: 0.53
      weight: 0.15
merge_method: dare_ties
base_model: Qwen/Qwen3-32B
parameters:
  int8_mask: true
  normalize: false
dtype: bfloat16
```
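A merge like this is typically produced by saving the configuration to a file and running mergekit over it. Below is a minimal sketch using mergekit's Python entry point, assuming the config is saved as `config.yaml`; the file and output paths are placeholders, and the exact `MergeOptions` fields can vary between mergekit versions.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above.
with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Fetch the source models, apply DARE sparsification and TIES sign
# election as specified, and write the merged weights to disk.
run_merge(
    merge_config,
    "./merged-model",  # placeholder output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```

The result loads like any other Qwen3-32B checkpoint, for example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./merged-model")
model = AutoModelForCausalLM.from_pretrained(
    "./merged-model",
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)
```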