Merge method paper: Model Breadcrumbs: Scaling Multi-Task Model Merging with Sparse Masks (arXiv:2312.06795)
This is a merge of pre-trained language models created with mergekit.
This model was merged using the Model Breadcrumbs merge method, with unsloth/Llama-3.3-70B-Instruct as the base model.
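At a high level, Model Breadcrumbs sparsifies each model's task vector (its difference from the base) by trimming both tails of the magnitude distribution: the largest-magnitude outliers and the smallest, near-zero entries are dropped, and only a middle band is merged back into the base. Below is a minimal, illustrative sketch of that idea in PyTorch; it is not mergekit's implementation, and the exact semantics of `density` and `gamma` (and options such as `rescale`) may differ slightly in mergekit.

```python
import torch

def breadcrumbs_mask(base: torch.Tensor, finetuned: torch.Tensor,
                     density: float = 0.4, gamma: float = 0.05) -> torch.Tensor:
    """Sparsify a task vector by trimming both tails of its magnitude distribution.

    Roughly `density` of the entries are kept: the top `gamma` fraction
    (outliers) and the remaining smallest entries are zeroed out.
    """
    delta = (finetuned - base).flatten()
    magnitudes = delta.abs()
    n = magnitudes.numel()

    # Rank entries by magnitude, largest first.
    order = torch.argsort(magnitudes, descending=True)

    n_outliers = int(gamma * n)   # largest-magnitude entries to drop
    n_keep = int(density * n)     # band of entries to retain

    mask = torch.zeros(n, dtype=torch.bool)
    mask[order[n_outliers:n_outliers + n_keep]] = True

    return torch.where(mask, delta, torch.zeros_like(delta)).reshape(base.shape)

# The merged tensor is then (conceptually) the base plus a weighted sum of
# the masked task vectors:
# merged = base + sum(weight_i * breadcrumbs_mask(base, model_i, density_i, gamma_i))
```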
The following models were included in the merge:

* Nohobby/AbominationSnowPig
* TareksLab/Wordforger-V1b-LLaMa-70B
* TareksLab/RolePlayer-V6-LLaMa-70B
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: unsloth/Llama-3.3-70B-Instruct
  - model: Nohobby/AbominationSnowPig
    parameters:
      density: 0.4
      gamma: 0.05
      weight: 0.4
  - model: TareksLab/Wordforger-V1b-LLaMa-70B
    parameters:
      density: 0.4
      gamma: 0.07
      weight: 0.4
  - model: TareksLab/RolePlayer-V6-LLaMa-70B
    parameters:
      density: 0.5
      gamma: 0.05
      weight: 0.2
base_model: unsloth/Llama-3.3-70B-Instruct
merge_method: breadcrumbs
parameters:
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer_source: base
```
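To reproduce a merge like this locally, the configuration can be passed to the mergekit-yaml CLI or to mergekit's Python API. The sketch below assumes mergekit's Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`) and a config saved as `config.yaml`; argument names may vary between mergekit versions, and merging 70B-parameter models requires substantial disk space and memory.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the result to ./merged-model.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=True,
        low_cpu_memory=True,
    ),
)
```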