Paper: [DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling](https://arxiv.org/abs/2406.11617)
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method, with /workspace/prototype-0.4x295 as the base. DELLA reduces interference between the merged models by pruning each model's delta parameters with magnitude-based sampling: `density` sets the fraction of deltas kept per model, `epsilon` the range over which the magnitude-dependent drop probabilities vary, and `lambda` a scaling factor applied to the merged deltas.
The following models were included in the merge:

* shisa-ai/shisa-v2-llama3.3-70b
* AstroMLab/AstroSage-70B
* LumiOpen/Llama-Poro-2-70B-Instruct
* watt-ai/watt-tool-70B
* TheDrummer/Fallen-Llama-3.3-70B-v1
* deepcogito/cogito-v2-preview-llama-70B
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: /workspace/cache/models--shisa-ai--shisa-v2-llama3.3-70b/snapshots/0a3080fbcbfbb0160c30db82b05be039453a4c01
    parameters:
      weight: 0.16
      density: 0.7
      epsilon: 0.2
  - model: /workspace/cache/models--AstroMLab--AstroSage-70B/snapshots/86496984f418ef5a6825f2c16983595e7f7d5930
    parameters:
      weight: 0.16
      density: 0.7
      epsilon: 0.2
  - model: /workspace/cache/models--LumiOpen--Llama-Poro-2-70B-Instruct/snapshots/ba7a467a544e2b8d944a8a8636120fd0fea9358d
    parameters:
      weight: 0.16
      density: 0.7
      epsilon: 0.2
  - model: /workspace/cache/models--watt-ai--watt-tool-70B/snapshots/dbe19344ec6ee4b9e1636e9e6ce24fc6a85a725e
    parameters:
      weight: 0.16
      density: 0.7
      epsilon: 0.2
  - model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
    parameters:
      weight: 0.16
      density: 0.7
      epsilon: 0.2
  - model: /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce
    parameters:
      weight: 0.2
      density: 0.5
      epsilon: 0.25
base_model: /workspace/prototype-0.4x295
merge_method: della
parameters:
  normalize: false
  lambda: 1.05
chat_template: llama3
pad_to_multiple_of: 8
int8_mask: true
tokenizer:
  source: base
dtype: float32
```
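To reproduce a merge of this shape, the config can be passed to mergekit's command-line entry point, e.g. `mergekit-yaml config.yaml ./merged-model --cuda --lazy-unpickle` (here `config.yaml` and `./merged-model` are placeholder paths, and the local snapshot paths above would need to exist on your machine).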
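The merge output is a standard Hugging Face checkpoint, so it loads with the usual transformers API. Below is a minimal inference sketch, assuming the merge was written to a hypothetical `./merged-model` directory; since the config's `dtype: float32` makes for a very large 70B checkpoint, downcasting at load time is assumed here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-model"  # hypothetical output path of the merge

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # downcast from the float32 merge dtype
    device_map="auto",           # shard across available GPUs
)

# chat_template: llama3 in the config means the saved tokenizer carries a
# Llama 3 chat template, so apply_chat_template can be used directly.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```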