Paper: [FuseChat: Knowledge Fusion of Chat Models](https://huggingface.co/papers/2408.07990)
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with TareksLab/Braniac-V3-LLaMa-70B as the base. SCE selects the highest-variance elements of each model's delta from the base (the fraction set by `select_topk`), derives fusion coefficients from those elements, and erases elements whose signs conflict with the majority before merging.
The following models were included in the merge:

* mlabonne/Hermes-3-Llama-3.1-70B-lorablated
* Sao10K/L3-70B-Euryale-v2.1
* SicariusSicariiStuff/Negative_LLAMA_70B
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
    parameters:
      weight: 0.25
      density: 0.5
      select_topk: 0.5
      lambda: 1.0
  - model: Sao10K/L3-70B-Euryale-v2.1
    parameters:
      weight: 0.25
      density: 0.5
      select_topk: 0.5
      lambda: 1.0
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      weight: 0.25
      density: 0.5
      select_topk: 0.5
      lambda: 1.0
  - model: TareksLab/Braniac-V3-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
      select_topk: 0.5
      lambda: 1.0
base_model: TareksLab/Braniac-V3-LLaMa-70B
merge_method: sce
parameters:
  normalize: false
tokenizer:
  source: union
chat_template: llama3
dtype: bfloat16
```
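To reproduce the merge, the `mergekit-yaml` CLI consumes this configuration directly (e.g. `mergekit-yaml config.yaml ./merged --cuda`). Below is a rough sketch of the same merge driven from Python via mergekit's API; the paths and `MergeOptions` values are illustrative assumptions, not part of this card:

```python
# Sketch: run the merge above through mergekit's Python API.
# Assumes mergekit is installed (pip install mergekit) and that the YAML
# configuration shown in this card is saved as config.yaml.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",      # where the merged model is written
    options=MergeOptions(
        cuda=True,            # do the tensor arithmetic on GPU
        copy_tokenizer=True,  # emit the union tokenizer requested above
        lazy_unpickle=True,   # stream shards to reduce peak RAM
    ),
)
```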
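Once merged (or downloaded), the result loads like any Llama 3 checkpoint. A minimal inference sketch with transformers follows; the repo id is a placeholder, since this card does not name the final model, and `chat_template: llama3` means the Llama 3 chat template applies:

```python
# Sketch: chat inference with the merged model via transformers.
# "TareksLab/<merged-model>" is a placeholder; substitute the actual repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TareksLab/<merged-model>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

# chat_template: llama3 in the config, so the tokenizer carries the
# Llama 3 chat template.
messages = [{"role": "user", "content": "Hello! Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```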