DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling (arXiv:2406.11617)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Linear DELLA (della_linear) merge method, with Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B as the base; an illustrative sketch of how the method combines the models follows the configuration below.
The following models were included in the merge:
- Sao10K/Llama-3.3-70B-Vulpecula-r1
- Steelskull/L3.3-Cu-Mai-R1-70b
The following YAML configuration was used to produce this model:
models:
  - model: Sao10K/Llama-3.3-70B-Vulpecula-r1
    parameters:
      weight: 0.33
      density: 0.7
  - model: Steelskull/L3.3-Cu-Mai-R1-70b
    parameters:
      weight: 0.34
      density: 0.7
  - model: Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B
    parameters:
      weight: 0.33
      density: 0.7
merge_method: della_linear
base_model: Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B
parameters:
  epsilon: 0.2
  lambda: 1.1
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: Sao10K/Llama-3.3-70B-Vulpecula-r1
  pad_to_multiple_of: 8
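
For intuition, here is a minimal, illustrative sketch of what della_linear does with the density, epsilon, lambda, and weight values above. This is not mergekit's implementation: the function names are hypothetical, and the exact drop-probability mapping (spread of roughly +/- epsilon/2 around 1 - density, with high-magnitude delta entries dropped least) is an assumption based on the DELLA paper.

```python
# Illustrative sketch of della_linear, NOT mergekit's implementation.
import torch

def magnitude_sample(delta: torch.Tensor, density: float, epsilon: float) -> torch.Tensor:
    """Stochastically drop delta entries, favoring high-magnitude ones, then rescale."""
    base_drop = 1.0 - density                               # density 0.7 -> ~30% dropped on average
    flat = delta.abs().flatten()
    # Rank entries by magnitude, normalized to [0, 1] (0 = smallest, 1 = largest).
    ranks = flat.argsort().argsort().float() / max(flat.numel() - 1, 1)
    drop_p = (base_drop + epsilon / 2.0) - epsilon * ranks  # assumed mapping; see the DELLA paper
    drop_p = drop_p.clamp(0.0, 1.0).reshape(delta.shape)
    keep = torch.bernoulli(1.0 - drop_p)
    return delta * keep / (1.0 - drop_p).clamp_min(1e-8)    # rescale survivors to preserve expectation

def della_linear(base: torch.Tensor, models: list[torch.Tensor], weights: list[float],
                 density: float = 0.7, epsilon: float = 0.2, lam: float = 1.1) -> torch.Tensor:
    """Weighted linear sum of the sampled deltas, scaled by lambda and added to the base."""
    merged_delta = sum(w * magnitude_sample(m - base, density, epsilon)
                       for m, w in zip(models, weights))
    return base + lam * merged_delta
```

With normalize: false the weights (0.33 / 0.34 / 0.33) are used as given rather than renormalized, and lambda: 1.1 slightly amplifies the merged delta before it is added back to the base model.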
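
To reproduce a merge like this, the configuration above can be saved (e.g. as config.yaml) and run through mergekit. The snippet below is a sketch using mergekit's Python entry points; the exact option names can differ between mergekit versions, so treat it as an outline (the mergekit-yaml command-line tool is the usual alternative). The output path is a hypothetical example.

```python
# Sketch of driving mergekit from Python; option names may vary by mergekit version.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:   # the YAML configuration shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./della-merged-70b",                        # hypothetical output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=True,
    ),
)
```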