# prototype-0.4x202
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the NuSLERP merge method.
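NuSLERP generalizes spherical linear interpolation (SLERP) to merging model weights: instead of averaging two tensors linearly, it interpolates along the arc between them, preserving their norms' geometry. As a rough illustration only (the actual mergekit implementation works tensor-by-tensor with its own normalization and degenerate-case handling), a minimal SLERP sketch over toy weight vectors might look like:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors.

    Simplified sketch of the interpolation underlying (Nu)SLERP merging;
    not the mergekit implementation.
    """
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    # Angle between the two (normalized) weight vectors
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:
        # Nearly parallel vectors: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    so = np.sin(omega)
    return np.sin((1 - t) * omega) / so * v0 + np.sin(t * omega) / so * v1

# t = 0.25 mirrors this merge's 0.75 / 0.25 weighting: the result stays
# closer to the first model's weights than to the second's.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
merged = slerp(0.25, a, b)
```

For unit vectors, SLERP keeps the result on the unit sphere, whereas a plain 0.75/0.25 linear average would shrink its norm.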
### Models Merged
The following models were included in the merge:
- /workspace/cache/models--nvidia--Llama-3.1-Nemotron-70B-Instruct-HF/snapshots/031d4042f36adc1a52cca51b331d25cbe3cf1022
- /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: /workspace/cache/models--nvidia--Llama-3.1-Nemotron-70B-Instruct-HF/snapshots/031d4042f36adc1a52cca51b331d25cbe3cf1022
    parameters:
      weight: 0.75
  - model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
    parameters:
      weight: 0.25
merge_method: nuslerp
tokenizer:
  source: /workspace/cache/models--nvidia--Llama-3.1-Nemotron-70B-Instruct-HF/snapshots/031d4042f36adc1a52cca51b331d25cbe3cf1022
  pad_to_multiple_of: 8
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
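To reproduce a merge like this, the configuration above can be saved to a file and passed to mergekit's `mergekit-yaml` command. A rough sketch, assuming mergekit is installed and the model snapshot paths in the config exist locally (the output directory name here is arbitrary):

```shell
# Install mergekit, then run the merge described by the YAML config.
pip install mergekit
mergekit-yaml config.yaml ./prototype-0.4x202 --cuda
```

Note that merging two 70B-parameter models at `dtype: float32` requires substantial disk space and memory; the `--cuda` flag offloads the tensor arithmetic to a GPU when one is available.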