Model Stock: All we need is just a few fine-tuned models
Paper: arXiv:2403.19522
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with /workspace/cache/models--Ppoyaa--MythoNemo-L3.1-70B-v1.0/snapshots/faaa4e992764eb4667b8f541dcf75ce8b7aaadcc as the base model.
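For intuition, Model Stock (Jang et al., 2024) interpolates between the average of the fine-tuned weights and the pretrained anchor, choosing the interpolation ratio from the angle between the fine-tuned checkpoints' weight deltas. A minimal per-tensor sketch in plain Python, based on the paper's formula (names are illustrative; mergekit applies this per layer over full tensors):

```python
import math

def model_stock_merge(w0, finetuned):
    """Merge one flattened weight tensor via the Model Stock rule.

    w0: base (pretrained) weights; finetuned: list of k fine-tuned weight vectors.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    k = len(finetuned)
    # Delta of each fine-tuned model from the base.
    deltas = [[wi - b for wi, b in zip(w, w0)] for w in finetuned]
    # Average pairwise cosine similarity between deltas.
    cosines = [
        dot(deltas[i], deltas[j])
        / (math.sqrt(dot(deltas[i], deltas[i])) * math.sqrt(dot(deltas[j], deltas[j])))
        for i in range(k)
        for j in range(i + 1, k)
    ]
    cos_theta = sum(cosines) / len(cosines) if cosines else 1.0
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Move from the base toward the fine-tuned average by t.
    w_avg = [sum(vals) / k for vals in zip(*finetuned)]
    return [t * a + (1 - t) * b for a, b in zip(w_avg, w0)]
```

When the fine-tuned deltas agree (cosine near 1), t approaches 1 and the merge is close to the plain average; when they are near-orthogonal, t shrinks and the result stays close to the base weights.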
The following models were included in the merge:
* /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83
* /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
* /workspace/cache/models--marcelbinz--Llama-3.1-Centaur-70B/snapshots/7e42dbd2e967200cee4b3eafe523c154edc2e766
* /workspace/cache/models--AstroMLab--AstroSage-70B/snapshots/86496984f418ef5a6825f2c16983595e7f7d5930
* /workspace/cache/models--watt-ai--watt-tool-70B/snapshots/dbe19344ec6ee4b9e1636e9e6ce24fc6a85a725e
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83
  - model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
  - model: /workspace/cache/models--marcelbinz--Llama-3.1-Centaur-70B/snapshots/7e42dbd2e967200cee4b3eafe523c154edc2e766
  - model: /workspace/cache/models--AstroMLab--AstroSage-70B/snapshots/86496984f418ef5a6825f2c16983595e7f7d5930
  - model: /workspace/cache/models--watt-ai--watt-tool-70B/snapshots/dbe19344ec6ee4b9e1636e9e6ce24fc6a85a725e
base_model: /workspace/cache/models--Ppoyaa--MythoNemo-L3.1-70B-v1.0/snapshots/faaa4e992764eb4667b8f541dcf75ce8b7aaadcc
merge_method: model_stock
tokenizer:
  source: base
chat_template: llama3
int8_mask: true
pad_to_multiple_of: 8
dtype: bfloat16
```
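A configuration like this is typically run through mergekit's `mergekit-yaml` CLI. A hedged usage sketch, assuming mergekit is installed (`pip install mergekit`) and the configuration above is saved as `model_stock.yaml` (a hypothetical filename), with all referenced snapshots already in the local cache:

```shell
# Run the Model Stock merge described by the config and write the
# merged model to ./merged-model. --cuda uses the GPU for tensor math;
# --lazy-unpickle reduces peak RAM while loading checkpoint shards.
mergekit-yaml model_stock.yaml ./merged-model --cuda --lazy-unpickle
```

Note that merging six 70B checkpoints requires enough disk for all source snapshots plus the output, so this is normally run on a large workspace volume such as the `/workspace/cache` paths shown above.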