---
base_model:
- cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
- DoppelReflEx/MN-12B-Mimicore-GreenSnake
- MarinaraSpaghetti/NemoMix-Unleashed-12B
- IntervitensInc/Mistral-Nemo-Base-2407-chatml
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---
Version: [Miyuri](#) - [Yukina](https://huggingface.co/DoppelReflEx/MN-12B-FoxFrame-Yukina) - [Shinori](https://huggingface.co/DoppelReflEx/MN-12B-FoxFrame-Shinori)

# What is this?

A very nice merge series, to be honest. I have tested this and gotten good results so far.

In my test character card, it gives me **an energetic, gyaru-like** girl, LOL. You should try it.

Good for RP and ERP.

PS: Sometimes, because of cgato/Nemo-12b-Humanize-KTO-Experimental-Latest, the raw ```<|im_end|>``` token will appear in the output; when that happens, write a few words yourself or reroll the message.

## Template? ChatML, of course!
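For reference, a minimal sketch of building a ChatML prompt in Python (the helper function and example messages below are illustrative, not part of this repo):

```python
# Build a prompt in ChatML format: each turn is wrapped in
# <|im_start|>{role} ... <|im_end|> markers.
def to_chatml(messages):
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "You are an energetic, gyaru-like character."},
    {"role": "user", "content": "Hey, what's up?"},
]
print(to_chatml(messages))
```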

<details>
  <summary>Merge Detail</summary>
  <p>

### Models Merged

The following models were included in the merge:
* [cgato/Nemo-12b-Humanize-KTO-Experimental-Latest](https://huggingface.co/cgato/Nemo-12b-Humanize-KTO-Experimental-Latest)
* [DoppelReflEx/MN-12B-Mimicore-GreenSnake](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-GreenSnake)
* [MarinaraSpaghetti/NemoMix-Unleashed-12B](https://huggingface.co/MarinaraSpaghetti/NemoMix-Unleashed-12B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
 - model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
   parameters:
     density: 0.9
     weight: 1
 - model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
   parameters:
     density: 0.5
     weight: 0.7
 - model: MarinaraSpaghetti/NemoMix-Unleashed-12B
   parameters:
     density: 0.7
     weight: 0.5
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base

```
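
To reproduce the merge, save the YAML above (e.g. as `config.yaml`, name is just an example) and run mergekit's `mergekit-yaml` CLI on it. A quick sanity check of the config with PyYAML:

```python
import yaml  # pip install pyyaml

# The merge config from this card, inlined so the check is self-contained.
config_text = """
models:
 - model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
   parameters:
     density: 0.9
     weight: 1
 - model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
   parameters:
     density: 0.5
     weight: 0.7
 - model: MarinaraSpaghetti/NemoMix-Unleashed-12B
   parameters:
     density: 0.7
     weight: 0.5
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base
"""

config = yaml.safe_load(config_text)
# Three donor models merged via DARE-TIES onto the ChatML base.
print(config["merge_method"], len(config["models"]))
```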
  </p>
</details>