---
base_model:
- allura-org/L3.1-8b-RP-Ink
- squ11z1/Hypnos-i1-8B
- Locutusque/liberalis-cogitator-llama-3.1-8b-dpo
library_name: transformers
tags:
- mergekit
- merge
---
# calm

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
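Since the result is a standard `transformers` checkpoint with the Llama 3 chat template, it loads like any other causal LM. A minimal usage sketch follows; the `repo_id` below is a placeholder, not a confirmed Hub path:

```python
# Minimal sketch of chatting with the merged model via transformers.
# repo_id is a placeholder; point it at wherever this merge is hosted or saved locally.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/calm"  # placeholder Hub id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # weights were merged in float32; downcast here if desired
    device_map="auto",    # requires accelerate
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```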
## Merge Details

### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [squ11z1/Hypnos-i1-8B](https://huggingface.co/squ11z1/Hypnos-i1-8B) as the base model.
### Models Merged

The following models were included in the merge:

* [allura-org/L3.1-8b-RP-Ink](https://huggingface.co/allura-org/L3.1-8b-RP-Ink)
* [Locutusque/liberalis-cogitator-llama-3.1-8b-dpo](https://huggingface.co/Locutusque/liberalis-cogitator-llama-3.1-8b-dpo)

### Configuration

The following YAML configuration was used to produce this model:
```yaml
base_model: squ11z1/Hypnos-i1-8B
chat_template: llama3
dtype: float32
merge_method: sce
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 32]
        model: allura-org/L3.1-8b-RP-Ink
      - layer_range: [0, 32]
        model: Locutusque/liberalis-cogitator-llama-3.1-8b-dpo
      - layer_range: [0, 32]
        model: squ11z1/Hypnos-i1-8B
parameters:
  select_topk: 0.25
tokenizer:
  pad_to_multiple_of: 32
```
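To reproduce the merge, the configuration above can be saved (e.g. as `config.yml`) and passed to mergekit, either through the `mergekit-yaml` CLI or its Python entry points. Below is a rough sketch using the Python API, assuming a recent mergekit install; the exact `MergeOptions` fields may differ between mergekit versions.

```python
# Rough sketch of re-running this merge with mergekit's Python API.
# Assumes the YAML above is saved as config.yml; option names follow the
# mergekit README and may differ slightly between versions.
# CLI equivalent: mergekit-yaml config.yml ./calm-merged
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./calm-merged",                     # directory the merged weights are written to
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is present
        copy_tokenizer=True,             # emit the tokenizer next to the merged model
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

Here `select_topk: 0.25` asks SCE to keep roughly the top 25% of task-vector elements (ranked by variance across the donor models) when fusing them into the base, and `dtype: float32` runs the merge arithmetic in full precision.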