parameters guide
samplers guide
model generation
role play settings
quant selection
arm quants
iq quants vs q quants
optimal model setting
gibberish fixes
coherence
instruction following
quality generation
chat settings
quality settings
llamacpp server
llamacpp
lmstudio
sillytavern
koboldcpp
backyard
ollama
model generation steering
steering
model generation fixes
text generation webui
ggufs
exl2
full precision
quants
imatrix
neo imatrix
llama
llama-3
gemma
gemma2
gemma3
llama-2
llama-3.1
llama-3.2
mistral
Mixture of Experts
mixtral
Update README.md
README.md
CHANGED
@@ -254,7 +254,17 @@ LLAMACPP-SERVER EXE:
 
 https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md
 
-
+The Local LLM Settings Guide/Rant (covers a lot of parameters/samplers - lots of detail)
+
+https://rentry.org/llm-settings
+
+A Visual Guide of some top parameters / Samplers in action which you can play with and see how they interact:
+
+https://artefact2.github.io/llm-sampling/index.xhtml
+
+NOTE:
+
+I have also added notes too in the sections below for almost all parameters, samplers, and advanced samplers as well.
 
 OTHER:
 
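Since the guides linked above deal with how temperature, top-k, and top-p interact, here is a minimal sketch of that sampling pipeline in plain Python. The ordering (temperature, then top-k, then top-p) follows the common llama.cpp-style convention, but the exact pipeline and defaults vary by backend, so this is an illustration, not any one engine's implementation.

```python
import math
import random

def sample_next_token(logits, temperature=0.8, top_k=40, top_p=0.95, rng=random):
    """Pick a token id from raw logits using temperature, top-k, then top-p.

    A simplified sketch of a common sampler chain; real backends differ in
    ordering, defaults, and extra samplers (min-p, repetition penalty, etc.).
    """
    # Temperature: < 1.0 sharpens the distribution, > 1.0 flattens it.
    scaled = [(i, l / temperature) for i, l in enumerate(logits)]

    # Top-k: keep only the k highest-logit candidates.
    scaled.sort(key=lambda pair: pair[1], reverse=True)
    scaled = scaled[:top_k]

    # Softmax over the survivors (subtract max for numerical stability).
    m = max(l for _, l in scaled)
    exps = [(i, math.exp(l - m)) for i, l in scaled]
    total = sum(e for _, e in exps)
    probs = [(i, e / total) for i, e in exps]

    # Top-p (nucleus): keep the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        mass += p
        if mass >= top_p:
            break

    # Renormalise the kept candidates and draw one.
    total = sum(p for _, p in kept)
    r = rng.random() * total
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

Playing with the arguments shows the interactions the visual guide demonstrates: a low temperature concentrates nearly all probability on the top token, so top-p then prunes everything else, while a high temperature lets top-k and top-p do the real filtering.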