---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x232

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method, with /workspace/prototype-0.4x229 as the base. In DELLA, `density` sets the fraction of delta parameters retained from each model, `epsilon` the spread of the magnitude-based drop probabilities around 1 - density, and `lambda` a scaling factor applied to the merged deltas.

### Models Merged

The following models were included in the merge:

* /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
* /workspace/cache/models--Sao10K--70B-L3.3-Cirrus-x1/snapshots/31d7ca33f3098d1eabe6f87a2c5b5bde85b20f35
* /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
* /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
    parameters:
      weight: 0.2
      density: 0.7
      epsilon: 0.2
      lambda: 1.1
  - model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
    parameters:
      weight: 0.2
      density: 0.7
      epsilon: 0.2
      lambda: 1.1
  - model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
    parameters:
      weight: 0.2
      density: 0.7
      epsilon: 0.2
      lambda: 1.1
  - model: /workspace/cache/models--Sao10K--70B-L3.3-Cirrus-x1/snapshots/31d7ca33f3098d1eabe6f87a2c5b5bde85b20f35
    parameters:
      weight: 0.2
      density: 0.7
      epsilon: 0.2
      lambda: 1.1
  - model: /workspace/prototype-0.4x229
    parameters:
      weight: 0.2
      density: 0.7
      epsilon: 0.1
      lambda: 1.0
base_model: /workspace/prototype-0.4x229
merge_method: della_linear
tokenizer:
  source: base
parameters:
  normalize: false
  pad_to_multiple_of: 8
  int8_mask: true
dtype: bfloat16
```
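
The configuration above can be re-run with mergekit directly. Below is a minimal sketch, assuming the YAML is saved locally as `della-linear.yml` (a hypothetical filename) and using mergekit's documented Python entry points (`MergeConfiguration`, `MergeOptions`, `run_merge`); exact option names may vary between mergekit versions, and all source model paths must exist on disk.

```python
# Minimal sketch: re-run the Linear DELLA merge with mergekit's Python API.
# Assumes the YAML above is saved as ./della-linear.yml (hypothetical name).
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./della-linear.yml"    # the configuration shown above
OUTPUT_PATH = "./prototype-0.4x232"  # directory to write the merged model to

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # matches `tokenizer: source: base` above
    ),
)
```

The roughly equivalent command-line invocation is `mergekit-yaml della-linear.yml ./prototype-0.4x232 --cuda`.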