---
license: cc-by-nc-4.0
base_model:
- crestf411/MN-Slush
- IntervitensInc/Mistral-Nemo-Base-2407-chatml
- DoppelReflEx/MN-12B-Mimicore-GreenSnake
- cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
library_name: transformers
tags:
- mergekit
- merge
---
# What is this?
This model was previously named WhiteSnake-V2, but its eval scores were not good, so I decided to rename it. It is very good at creative writing, RP, and ERP, but not good at math.
Its main goal is to beat the original WhiteSnake in both evals and real use cases. It doesn't pull that off by a wide margin, but it is decent.

GGUF quants, many thanks to mradermacher: https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-4-GGUF

My own Q6_K: https://huggingface.co/DoppelReflEx/MN-12B-WolFrame-Q6_K-GGUF
<details>
<summary>Merge Details</summary>
<p>
### Models Merged
The following models were included in the merge:
* [crestf411/MN-Slush](https://huggingface.co/crestf411/MN-Slush)
* [DoppelReflEx/MN-12B-Mimicore-GreenSnake](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-GreenSnake)
* [cgato/Nemo-12b-Humanize-KTO-Experimental-Latest](https://huggingface.co/cgato/Nemo-12b-Humanize-KTO-Experimental-Latest)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
parameters:
density: 0.9
weight: 1
- model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
parameters:
density: 0.6
weight: 0.8
- model: crestf411/MN-Slush
parameters:
density: 0.7
weight: 0.5
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base
```
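
If you want to reproduce this merge, a config like the one above can be run with [mergekit](https://github.com/arcee-ai/mergekit). The sketch below assumes mergekit is installed, a GPU is available for `--cuda`, and the output directory name is just an example:

```shell
# Install mergekit, then run the DARE-TIES merge described by config.yaml.
# The merge downloads all four source models, so expect tens of GB of traffic.
pip install mergekit
mergekit-yaml config.yaml ./MN-12B-WolFrame --cuda
```
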
</p>
</details>