---
base_model:
- Sorawiz/MistralCreative-24B-Chat
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- ReadyArt/Forgotten-Abomination-24B-v4.0
- ReadyArt/Forgotten-Transgression-24B-v4.1
- ReadyArt/Gaslight-24B-v1.0
- ReadyArt/The-Omega-Directive-M-24B-v1.0
- anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF
library_name: transformers
tags:
- mergekit
- merge
---
# Chat Template
Mistral Instruct
```
[INST] {{ if .System }}{{ .System }}

{{ end }}{{ .Prompt }} [/INST] {{ .Response }}
```
ChatML
```
{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}
```
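If you are wiring the ChatML layout above into your own tooling, it can be rendered with a few lines of plain Python. This is a minimal sketch; `render_chatml` is a hypothetical helper, not part of any library.

```python
# Minimal sketch: render the ChatML layout shown above.
# `render_chatml` is a hypothetical helper name, not a library API.
def render_chatml(system: str, prompt: str, response: str = "") -> str:
    parts = []
    if system:
        parts.append(f"<|im_start|>system\n{system}<|im_end|>\n")
    if prompt:
        parts.append(f"<|im_start|>user\n{prompt}<|im_end|>\n")
    # The assistant turn is left open so the model generates the reply;
    # <|im_end|> is appended only once a response exists.
    parts.append(f"<|im_start|>assistant\n{response}")
    if response:
        parts.append("<|im_end|>")
    return "".join(parts)

print(render_chatml("You are terse.", "Hi!"))
```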
# GGUF
Thanks to [mradermacher](https://huggingface.co/mradermacher) for providing the GGUF quantizations of this model:
* Static quants - [mradermacher/MistralCreative-24B-Instruct-GGUF](https://huggingface.co/mradermacher/MistralCreative-24B-Instruct-GGUF)
* Imatrix quants - [mradermacher/MistralCreative-24B-Instruct-i1-GGUF](https://huggingface.co/mradermacher/MistralCreative-24B-Instruct-i1-GGUF)
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF](https://huggingface.co/anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF) as the base model.
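For intuition, TIES merging works in three steps per parameter: trim each fine-tune's task vector (its delta from the base) to the highest-magnitude fraction, elect a sign by summed magnitude, and average only the values that agree with that sign. The following is a toy NumPy sketch on flat weight vectors, not mergekit's actual implementation:

```python
import numpy as np

def ties_merge(base, finetuned, density=1.0):
    """Toy TIES merge on flat weight vectors (illustration only).

    Steps: (1) trim each task vector to its top-`density` fraction by
    magnitude, (2) elect a per-parameter sign by summed values,
    (3) average only the entries that agree with the elected sign.
    """
    task_vectors = [w - base for w in finetuned]
    trimmed = []
    for tv in task_vectors:
        k = max(1, int(round(density * tv.size)))
        threshold = np.sort(np.abs(tv))[-k]
        trimmed.append(np.where(np.abs(tv) >= threshold, tv, 0.0))
    stacked = np.stack(trimmed)
    elected_sign = np.sign(stacked.sum(axis=0))          # sign election
    agree = (np.sign(stacked) == elected_sign) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_tv = (stacked * agree).sum(axis=0) / counts   # disjoint mean
    return base + merged_tv
```

With `density: 1` (as in the final stage of the config below) no trimming occurs, so only sign election and disjoint averaging resolve conflicts between the merged models.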
### Models Merged
The following models were included in the merge:
* [ReadyArt/The-Omega-Directive-M-24B-v1.0](https://huggingface.co/ReadyArt/The-Omega-Directive-M-24B-v1.0)
* [Sorawiz/MistralCreative-24B-Test-U](https://huggingface.co/Sorawiz/MistralCreative-24B-Test-U)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
name: Sorawiz/MistralCreative-24B-Test-E
merge_method: dare_ties
base_model: Sorawiz/MistralCreative-24B-Chat
models:
  - model: Sorawiz/MistralCreative-24B-Chat
    parameters:
      weight: 0.20
  - model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
    parameters:
      weight: 0.20
  - model: ReadyArt/Forgotten-Transgression-24B-v4.1
    parameters:
      weight: 0.30
  - model: ReadyArt/Forgotten-Abomination-24B-v4.0
    parameters:
      weight: 0.30
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto
---
name: Sorawiz/MistralCreative-24B-Test-U
merge_method: dare_ties
base_model: Sorawiz/MistralCreative-24B-Test-E
models:
  - model: Sorawiz/MistralCreative-24B-Test-E
    parameters:
      weight: 0.3
  - model: ReadyArt/Gaslight-24B-v1.0
    parameters:
      weight: 0.5
  - model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
    parameters:
      weight: 0.2
parameters:
  density: 0.70
tokenizer:
  source: union
chat_template: auto
---
models:
  - model: anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF
  - model: Sorawiz/MistralCreative-24B-Test-U
    parameters:
      density: 1.00
      weight: 1.00
  - model: ReadyArt/The-Omega-Directive-M-24B-v1.0
    parameters:
      density: 1.00
      weight: 1.00
merge_method: ties
base_model: anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF
parameters:
  normalize: true
dtype: float32
```
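The `dare_ties` stages above add a DARE step before sign election: each task-vector entry is randomly dropped with probability `1 - density`, and the survivors are rescaled by `1/density` so the expected task vector is unchanged. A toy NumPy sketch of that drop-and-rescale step (illustration only, not mergekit's implementation):

```python
import numpy as np

def dare_drop(task_vector, density, seed=0):
    """Toy DARE step (illustration only): keep each delta with
    probability `density`, rescale survivors by 1/density so the
    expected task vector is unchanged."""
    rng = np.random.default_rng(seed)
    keep = rng.random(task_vector.shape) < density
    return np.where(keep, task_vector / density, 0.0)
```

This is why the intermediate stages can use `density: 0.70` and still combine well: dropping deltas sparsifies each contribution and reduces interference before the TIES-style sign resolution.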