Paper: [FuseChat: Knowledge Fusion of Chat Models](https://arxiv.org/abs/2408.07990)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83 as the base.
The following models were included in the merge:

* Delta-Vector/Shimamura-70B
* TheDrummer/Anubis-70B-v1.1
* Sao10K/L3.1-70B-Hanami-x1
* SicariusSicariiStuff/Negative_LLAMA_70B
* TheDrummer/Fallen-Llama-3.3-70B-v1
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: /workspace/cache/models--Delta-Vector--Shimamura-70B/snapshots/1106f197a3ea1424512c30a8576bd718313b57c3
  - model: /workspace/cache/models--TheDrummer--Anubis-70B-v1.1/snapshots/47ea1a3368e8d161b09acbc8c211ba4212e4b466
  - model: /workspace/cache/models--Sao10K--L3.1-70B-Hanami-x1/snapshots/f054d970fe9119d0237ce97029e6f5b9fce630eb
  - model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/f84e61554251d9355146537e39ba081c7b5580eb
  - model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
base_model: /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83
select_topk: 0.24
merge_method: sce
tokenizer:
  source: base
chat_template: llama3
pad_to_multiple_of: 8
int8_mask: true
dtype: float32
```
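For intuition, SCE (Select, Calculate, Erase) works per tensor on the "task vectors" (deltas from the base model): it keeps only the fraction of elements with the highest variance across models (`select_topk: 0.24` keeps the top 24%), derives per-model fusion weights from the surviving deltas, drops elements whose sign disagrees with the weighted majority, and adds the weighted combination back onto the base weights. The following is a minimal single-tensor numpy sketch of those steps; the function name and details are illustrative, not mergekit's actual implementation:

```python
import numpy as np

def sce_merge(base, models, select_topk=0.24):
    """Illustrative SCE fusion of one flattened weight tensor.

    base:   (d,) base-model weights
    models: list of (d,) fine-tuned model weights
    """
    deltas = np.stack([m - base for m in models])  # task vectors, shape (n, d)

    # Select: keep only the select_topk fraction of elements with the
    # highest variance across models; zero out the rest.
    var = deltas.var(axis=0)
    k = max(1, int(select_topk * var.size))
    mask = np.zeros_like(var, dtype=bool)
    mask[np.argsort(var)[-k:]] = True
    deltas = deltas * mask

    # Calculate: per-model fusion weights from the squared magnitude
    # of the surviving delta entries.
    w = (deltas ** 2).sum(axis=1)
    w = w / w.sum()

    # Erase: zero out entries whose sign disagrees with the
    # weighted-majority sign at that position.
    majority = np.sign((w[:, None] * deltas).sum(axis=0))
    deltas = np.where(np.sign(deltas) == majority, deltas, 0.0)

    # Fuse the remaining deltas back onto the base weights.
    return base + (w[:, None] * deltas).sum(axis=0)
```

In the real merge this runs over every parameter tensor of the five models listed above, with the deepcogito base supplying both the starting weights and the tokenizer.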