---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x296

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Multi-SLERP](https://goddard.blog/posts/multislerp-wow-what-a-cool-idea) merge method, with /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce as the base.

### Models Merged

The following models were included in the merge:

* /workspace/cache/models--Delta-Vector--Austral-70B-Winton/snapshots/0b84421cb68e7c69848e6265a1d36dcd8957d44a
* /workspace/cache/models--nvidia--Llama-3.1-Nemotron-70B-Instruct-HF/snapshots/031d4042f36adc1a52cca51b331d25cbe3cf1022

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /workspace/cache/models--nvidia--Llama-3.1-Nemotron-70B-Instruct-HF/snapshots/031d4042f36adc1a52cca51b331d25cbe3cf1022
    parameters:
      weight: [0.5]
  - model: /workspace/cache/models--Delta-Vector--Austral-70B-Winton/snapshots/0b84421cb68e7c69848e6265a1d36dcd8957d44a
    parameters:
      weight: [0.5]
base_model: /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce
merge_method: multislerp
tokenizer:
  source: base
chat_template: llama3
parameters:
  normalize_weights: false
  eps: 1e-8
pad_to_multiple_of: 8
int8_mask: true
dtype: bfloat16
```
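A merge like this can typically be re-run with mergekit's CLI entry point, e.g. `mergekit-yaml config.yml ./output-model --cuda`, after replacing the local snapshot paths above with the corresponding Hugging Face repo IDs (the `models--org--name` cache directories map to `deepcogito/cogito-v2-preview-llama-70B`, `Delta-Vector/Austral-70B-Winton`, and `nvidia/Llama-3.1-Nemotron-70B-Instruct-HF`).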
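For intuition only, the sketch below shows the general shape of a SLERP-style merge of two models' deltas against a shared base, using the equal `[0.5]` weights from the config above. It is a toy two-model illustration, not mergekit's actual `multislerp` implementation (which generalizes to more than two models and handles the `normalize_weights` and `eps` options as configured); the function names here are hypothetical.

```python
# Toy sketch: spherically interpolate two task vectors (model minus base),
# then add the result back onto the base weights. NOT mergekit's code.
import torch

def slerp(v0: torch.Tensor, v1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two flattened 1-D tensors."""
    u0 = v0 / (v0.norm() + eps)
    u1 = v1 / (v1.norm() + eps)
    # Angle between the two directions, clamped for numerical safety.
    omega = torch.arccos(torch.clamp(torch.dot(u0, u1), -1.0 + eps, 1.0 - eps))
    so = torch.sin(omega)
    if so.abs() < eps:  # near-parallel directions: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    return (torch.sin((1.0 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1

def merge_param(base: torch.Tensor, a: torch.Tensor, b: torch.Tensor, t: float = 0.5) -> torch.Tensor:
    """Merge one parameter tensor: slerp the deltas from the base, re-anchor on the base."""
    delta = slerp((a - base).flatten(), (b - base).flatten(), t)
    return base + delta.view_as(base)
```

Interpolating on the sphere rather than linearly averaging keeps the combined delta's direction on the geodesic between the two task vectors, which is the motivation for SLERP-family merges over plain weighted averaging.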