Paper: FuseChat: Knowledge Fusion of Chat Models (arXiv:2408.07990)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SCE merge method, with /workspace/prototype-0.4x295 as the base.
The following models were included in the merge:

* Doctor-Shotgun/L3.3-70B-Magnum-Diamond
* nbeerbower/Llama3.1-Gutenberg-Doppel-70B
* tdrussell/Llama-3-70B-Instruct-Storywriter
* EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
* TheDrummer/Fallen-Llama-3.3-70B-v1
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Diamond/snapshots/a7dfb66b4469a4c9ca07ff28bccc73a44797e76c
  - model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
  - model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
  - model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.0/snapshots/501b61987a44172bf2d68365893fcd13081ad4db
  - model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
base_model: /workspace/prototype-0.4x295
select_topk: 0.17
merge_method: sce
tokenizer:
  source: base
  chat_template: llama3
pad_to_multiple_of: 8
int8_mask: true
dtype: float32
```
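At a high level, SCE (Select, Calculate, Erase, from the FuseChat work) computes each source model's parameter deltas against the base, keeps only the highest-variance fraction of elements (here the top 17%, per `select_topk: 0.17`), drops elements whose sign disagrees with the majority, and fuses the survivors back into the base weights. The NumPy sketch below illustrates that idea for a single tensor; it is a simplified assumption of the method, not mergekit's actual implementation (in particular, the "Calculate" weighting step is reduced to a plain average, and `sce_merge` is a hypothetical helper name):

```python
import numpy as np

def sce_merge(base, finetuned, select_topk=0.17):
    """Illustrative SCE-style merge of one parameter tensor (simplified)."""
    # Task vectors: each source model's delta against the base weights.
    deltas = np.stack([m - base for m in finetuned])
    # Select: keep only the top-k fraction of elements with the highest
    # variance across the source models (select_topk in the config).
    var = deltas.var(axis=0)
    k = max(1, int(round(select_topk * var.size)))
    thresh = np.sort(var.ravel())[::-1][k - 1]
    deltas = deltas * (var >= thresh)
    # Erase: zero out elements whose sign disagrees with the majority sign.
    majority = np.sign(deltas.sum(axis=0))
    deltas = deltas * (np.sign(deltas) == majority)
    # Fuse: average the surviving contributions and add them to the base
    # (mergekit's real "Calculate" step uses variance-derived weights instead).
    count = np.maximum((deltas != 0).sum(axis=0), 1)
    return base + deltas.sum(axis=0) / count
```

In practice the merge is of course not reimplemented by hand: the YAML above is passed to mergekit's `mergekit-yaml` CLI, which applies SCE tensor by tensor and writes the merged checkpoint.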