- **License:** apache-2.0
- **Finetuned from model:** royallab/MN-LooseCannon-12B-v2

# Details

This model was trained on my own little dataset, free of synthetic data, which focuses solely on storywriting and scenario prompting (example: `[ Scenario: bla bla bla; Tags: bla bla bla ]`).
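
If it helps, the scenario-prompt convention above can be produced with a tiny helper. This is just a sketch of the `[ Scenario: ...; Tags: ... ]` shape shown above; the function name and signature are my own, not part of the model or dataset.

```python
def scenario_prompt(scenario, tags):
    # Build a prompt line in the dataset's "[ Scenario: ...; Tags: ... ]" shape.
    return f"[ Scenario: {scenario}; Tags: {', '.join(tags)} ]"

print(scenario_prompt("bla bla bla", ["bla", "bla bla"]))
```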

I don't really recommend this model due to its nature and obvious flaws (rampant impersonation, not very smart, etc.). It's a one-trick pony and will be really rough for the average LLM user to handle.

Instead, I recommend you use [Magnum-Picaro-0.7-v2-12b](https://huggingface.co/Trappu/Magnum-Picaro-0.7-v2-12b). The idea was to have Magnum work as some sort of stabilizer to fix the issues that emerge from the lack of multiturn/smart data in Picaro's dataset. It worked, I think. I enjoy the outputs and it's smart enough to work with.

# Prompting

If, for some reason, you still want to try this model over Magnum-Picaro: it was trained on ChatML with no system prompts, so below is the recommended prompt formatting.

```
<|im_start|>user
bla bla bla<|im_end|>
<|im_start|>assistant
bla bla bla you!<|im_end|>
```
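
If you're building prompts programmatically, the ChatML turns above can be assembled like this. A minimal sketch, assuming you send raw strings to your backend; `chatml_prompt` is a hypothetical helper, not an API this model ships with.

```python
def chatml_prompt(turns):
    # Wrap each (role, text) turn in ChatML tags, with no system prompt,
    # then leave an open assistant turn for the model to continue from.
    body = "".join(f"<|im_start|>{role}\n{text}<|im_end|>\n" for role, text in turns)
    return body + "<|im_start|>assistant\n"

prompt = chatml_prompt([("user", "bla bla bla")])
print(prompt)
```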

For SillyTavern users:

[Instruct template](https://firebasestorage.googleapis.com/v0/b/koios-academy.appspot.com/o/trappu%2FChatML%20custom%20Instruct%20template.json?alt=media&token=9142757f-811c-460c-ad0e-d04951b1687f)

[Context template](https://firebasestorage.googleapis.com/v0/b/koios-academy.appspot.com/o/trappu%2FChatML%20custom%20context%20template.json?alt=media&token=0926fc67-fa9f-4c86-ad16-8c7c4c8e0b64)

[Settings preset](https://firebasestorage.googleapis.com/v0/b/koios-academy.appspot.com/o/trappu%2FHigh%20temp%20-%20Min%20P%20(4).json?alt=media&token=ac569562-af11-4da1-83c1-d86b25bb4fe1)

The above settings are the ones I recommend.

Temp = 1.2

Min P = 0.1

DRY Rep Pen: Multiplier = 0.8, Base = 1.75, Allowed Length = 2, Penalty Range = 1024
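
Outside SillyTavern, the recommended values above map onto a sampler config like the following. The key names here are illustrative (they roughly match text-generation-webui / KoboldCpp-style DRY parameters); check your backend's API for the exact field names before using it.

```python
# Recommended sampler values from this card, as a config dict.
# Key names are an assumption -- rename them to match your backend.
SAMPLER_PRESET = {
    "temperature": 1.2,
    "min_p": 0.1,
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 2,
    "dry_penalty_range": 1024,
}

print(SAMPLER_PRESET)
```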

A little guide on useful samplers, on importing settings presets and instruct/context templates, and on other things people might find useful is [here](https://rentry.co/PygmalionFAQ#q-what-are-the-best-settings-for-rpadventurenarrationchatting).

Every other sampler should be neutralized.