Small changes to README.md
Filled out more info for V1
README.md CHANGED

@@ -129,7 +129,7 @@ Eta helps to increase the frequency of temperature updates; smaller - stable and
 Never managed to get stable enough results for very long period of progression, so I personally avoid it and use strict settings instead.
 </details>
 <details><summary>Smoothing Curve (new):</summary>
-Dynamically adjusts the
+Dynamically adjusts the Penalty, temperature, probability to avoid sudden changes; 1 and lower.
 
 Has stronger effect with higher Repetition Penalty.
 
@@ -159,7 +159,9 @@ Probability - the chance for Threshold to cutoff the desired most likely tokens.
 
 Very fine with very good creativity, level of details, emotional connections.
 
-Might confuse things like character names, species and etc (due to high Temperature and not low enough Top-A). **In such cases lower Repetition
+Might confuse things like character names, species and etc (due to high Temperature and not low enough Top-A). **In such cases lower Repetition Penalty to 1.02612**
+
+Top-A 0.043725 might work well only with Repetition Penalty 1.02612, but better with **V2**
 </details>
 <details><summary>V2 **-INSANE-DETAIL/ATTENTION-**</summary>
 <img src="https://gitlab.com/Azuro721/trueperfect-ai/-/raw/main/PERF2.png" style="float:right; width:200px; height:300px; padding:10px;">
@@ -211,7 +213,7 @@ Temperature 2.4 with Top-K 206 and TFS 0.8413 will output more attentive results
 
 Temperature 1.2 with Top-K 206 and TFS 0.8413 will output slightly more attentive results, with even less variety, emotions and "surprising" moments.
 </details>
-<details><summary>Repetition
+<details><summary>Repetition Penalty:</summary>
 Base value: 1.12082, which will output more creative, emotional, varied, smart and "exciting" results. But tends to have issues with asterisks and quotation marks. Good as an **assistant**. Similar to 1.02612, but with more creativity, less descriptions, faster pace, more interesting responses across multiple characters simultaneously.
 
 1.121: an alternative variant of 1.12082, which might fix issues with overly descriptive outputs, but 1.12082 is preferred and context and/or initial input (first) should be adjusted instead.