this is a continuation of the "tuxsentience" series made by @GrainWare
- `min_p = 0.00` (llama.cpp's default is 0.1)
- **`top_p = 0.95`**
- `presence_penalty = 0.0 to 2.0` (llama.cpp turns it off by default, but you can use it to reduce repetition; try `1.0`, for example)

Supports up to `262,144` tokens of context natively, but you can set it to `32,768` tokens for lower RAM use.
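The settings above correspond to llama.cpp's server flags, so a launch command might look like the sketch below. The model filename `graig.gguf` is a placeholder, not the actual file name:

```shell
# Launch llama-server with the recommended sampling settings.
# "graig.gguf" is a placeholder path; substitute your downloaded GGUF file.
# --ctx-size is reduced from the native 262144 to save RAM.
# --min-p 0.0 disables min_p sampling (llama.cpp defaults to 0.1).
# --presence-penalty is optional (0.0 to 2.0); 1.0 helps reduce repetition.
llama-server --model graig.gguf \
  --ctx-size 32768 \
  --min-p 0.0 \
  --top-p 0.95 \
  --presence-penalty 1.0
```

If you use the interactive `llama-cli` instead, the same sampling flags apply.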
## Disclaimer

Graig can be prone to saying offensive statements in rare circumstances due to the unpredictability of LLMs. These do not reflect our opinions/views and are a byproduct we are trying to avoid.

Newer Graig models (such as this one) are significantly less prone to this; however, if you do not set up the settings correctly or do not prompt correctly, this may still occur.

If you find Graig saying offensive statements under normal circumstances, please either open a community post on this model or email me at `electron271@allthingslinux.org`.

In public deployments such as on Discord, please set up a filter using something such as https://github.com/cherryl1k/llmcordplus.

("Normal circumstances" is defined as using the recommended settings and talking to Graig in a non-aggressive manner.)