Paper: FuseChat: Knowledge Fusion of Chat Models (arXiv:2408.07990)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SCE merge method, with TheDrummer/L3.3-Interleaved-Upscale-105B (local snapshot /workspace/cache/models--TheDrummer--L3.3-Interleaved-Upscale-105B/snapshots/dc1c192564ddf43133a71a7bdc8e6e91c69a2835) as the base.
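SCE builds a fused delta on top of the base model: for each parameter tensor it takes task vectors (model minus base), keeps only the most salient fraction of elements (the `select_topk` values in the configuration below), derives per-model coefficients from the surviving elements, drops elements whose sign conflicts with the consensus, and adds the result back onto the base. The sketch below is a rough single-tensor illustration of that idea in PyTorch, not mergekit's actual implementation; the function name `sce_merge_tensor`, the variance-based selection, the squared-magnitude weighting, and the single shared `select_topk` are simplifying assumptions.

```python
import torch


def sce_merge_tensor(base: torch.Tensor,
                     models: list[torch.Tensor],
                     select_topk: float = 0.3) -> torch.Tensor:
    """Illustrative single-tensor SCE-style merge (simplified sketch)."""
    # Select: task vectors relative to the base model.
    deltas = torch.stack([m - base for m in models])  # (n_models, ...)

    # Keep the top-k fraction of positions with the highest variance
    # across models; all other positions are zeroed out.
    variance = deltas.var(dim=0)
    k = max(1, int(select_topk * variance.numel()))
    threshold = variance.flatten().topk(k).values.min()
    deltas = deltas * (variance >= threshold).to(deltas.dtype)

    # Calculate: per-model coefficients from the surviving elements
    # (here, proportional to their squared magnitude).
    weights = deltas.pow(2).sum(dim=tuple(range(1, deltas.dim())))
    weights = weights / weights.sum().clamp_min(1e-12)

    # Erase: drop elements whose sign disagrees with the consensus sign.
    majority_sign = torch.sign(deltas.sum(dim=0))
    deltas = deltas * (torch.sign(deltas) == majority_sign).to(deltas.dtype)

    # Fuse the weighted deltas back onto the base tensor.
    fused_delta = (weights.view(-1, *([1] * (deltas.dim() - 1))) * deltas).sum(dim=0)
    return base + fused_delta


# Toy example: merge three slightly perturbed "models" onto a random base tensor.
base = torch.randn(64, 64)
merged = sce_merge_tensor(
    base,
    [base + 0.01 * torch.randn(64, 64) for _ in range(3)],
    select_topk=0.3,
)
```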
The following models were included in the merge:
* /workspace/hydro2
* /workspace/nemo2
* /workspace/herme2
* /workspace/cache/models--bruhzair--ignore-merge-17/snapshots/bd0af76a6bc4d9ae4bab5fa6b50e6545e6f3fd4f
* /workspace/deep2
The following YAML configuration was used to produce this model:
base_model: /workspace/cache/models--TheDrummer--L3.3-Interleaved-Upscale-105B/snapshots/dc1c192564ddf43133a71a7bdc8e6e91c69a2835
chat_template: llama3
dtype: float32
merge_method: sce
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 120]
            model: /workspace/hydro2
            parameters:
              select_topk: 0.3
          - layer_range: [0, 120]
            model: /workspace/nemo2
            parameters:
              select_topk: 0.3
          - layer_range: [0, 120]
            model: /workspace/herme2
            parameters:
              select_topk: 0.35
          - layer_range: [0, 120]
            model: /workspace/cache/models--bruhzair--ignore-merge-17/snapshots/bd0af76a6bc4d9ae4bab5fa6b50e6545e6f3fd4f
            parameters:
              select_topk: 0.25
          - layer_range: [0, 120]
            model: /workspace/deep2
            parameters:
              select_topk: 0.3
          - layer_range: [0, 120]
            model: /workspace/cache/models--TheDrummer--L3.3-Interleaved-Upscale-105B/snapshots/dc1c192564ddf43133a71a7bdc8e6e91c69a2835
            parameters:
              select_topk: 0.15
out_dtype: bfloat16
parameters:
  int8_mask: 1.0
tokenizer:
  source: base
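A configuration like the one above is typically executed either with the `mergekit-yaml` command-line tool or through mergekit's Python API. The sketch below assumes the YAML has been saved locally as `config.yaml` and that the referenced `/workspace` model paths exist on disk; the import paths and option names follow mergekit's documented Python example and may differ between versions.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe shown above (saved locally as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the SCE merge. cuda=True moves tensor math to the GPU, and
# lazy_unpickle reduces peak RAM when loading large checkpoints.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=True,
        copy_tokenizer=True,
        lazy_unpickle=True,
        low_cpu_memory=True,
    ),
)
```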