---
license: apache-2.0
---
# ArliAI-RPMax-12B-v1.2

## RPMax Series Overview

| [2B](https://huggingface.co/ArliAI/Gemma-2-2B-ArliAI-RPMax-v1.1) | [3.8B](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1) | [8B](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2) | [9B](https://huggingface.co/ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1) | [12B](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2) | [20B](https://huggingface.co/ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1) | [22B](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1) | [70B](https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.1) |
RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets, with a focus on variety and deduplication. The models are designed to be highly creative and non-repetitive: no two entries in the dataset share the same characters or situations, which prevents the model from latching onto a single personality and lets it understand and respond appropriately to any character or situation.

Early user feedback suggests these models do not feel like other RP models: they have a distinct style and generally do not feel in-bred.
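The dedup idea above can be illustrated with a toy sketch. This is not the actual RPMax pipeline (which is not described in detail here); the `character` field and the keep-first policy are assumptions used purely to show the principle of one entry per persona.

```python
# Toy sketch (NOT the actual RPMax pipeline) of persona-level dedup:
# keep at most one dataset entry per character so no persona repeats.

def dedup_by_character(entries):
    """Return entries, keeping only the first occurrence of each character."""
    seen = set()
    kept = []
    for entry in entries:
        if entry["character"] not in seen:
            seen.add(entry["character"])
            kept.append(entry)
    return kept

data = [
    {"character": "Aria", "text": "..."},
    {"character": "Borin", "text": "..."},
    {"character": "Aria", "text": "..."},  # duplicate persona, dropped
]
print([e["character"] for e in dedup_by_character(data)])  # ['Aria', 'Borin']
```

A real pipeline would also need fuzzy matching on situations and renamed characters, but the keep-first filter conveys the core constraint.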
You can access the model at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/

We also have a models ranking page at https://www.arliai.com/models-ranking

Ask questions in our new Discord server! https://discord.gg/MedJbtdd
## Model Description

ArliAI-RPMax-12B-v1.2 is a variant based on Mistral Nemo 12B Instruct 2407.

This is arguably the most successful RPMax model, in part because the Mistral base is already very uncensored to begin with.

The v1.2 update is a retrain on an incrementally improved RPMax dataset: entries are deduplicated even further, and filtering is improved to cut out irrelevant description text carried over from card-sharing sites.
### Training Details

* **Sequence Length**: 8192
* **Training Duration**: Approximately 2 days on 2x 3090Ti
* **Epochs**: 1 epoch, to minimize repetition sickness
* **QLoRA**: rank 64, alpha 128, resulting in ~2% trainable weights
* **Learning Rate**: 0.00001
* **Gradient Accumulation**: A very low 32, for better learning
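The ~2% trainable-weight figure for a rank-64 LoRA follows from simple arithmetic: each adapted linear layer of shape `(d_out, d_in)` gains `r * (d_in + d_out)` trainable parameters. The sketch below is a back-of-the-envelope illustration, not the authors' code; the 5120 layer width and the layer count are hypothetical round numbers, not Mistral Nemo's actual configuration.

```python
# Back-of-the-envelope sketch (not the training code) of why rank-64 LoRA
# yields only a few percent trainable weights. Layer dims are hypothetical.

def lora_added_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable params LoRA adds to one (d_out x d_in) linear layer:
    an (d_out x r) matrix plus an (r x d_in) matrix."""
    return rank * (d_in + d_out)

def trainable_fraction(layers, rank: int) -> float:
    """Fraction of adapted-layer weights that become trainable."""
    base = sum(d_in * d_out for d_in, d_out in layers)
    added = sum(lora_added_params(d_in, d_out, rank) for d_in, d_out in layers)
    return added / base

# Illustrative: 160 square projections of width 5120 (assumed, not Nemo's real dims)
layers = [(5120, 5120)] * 160
frac = trainable_fraction(layers, rank=64)
print(f"{frac:.1%}")  # prints "2.5%"
```

For a square layer the fraction reduces to `2r / d`, so wider layers push the trainable share down; counting the model's untouched parameters (embeddings, norms) lowers the overall figure toward the ~2% quoted above.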
## Quantization

The model is available in the following formats:

* **FP16**: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
* **GPTQ_Q4**: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-GPTQ_Q4
* **GPTQ_Q8**: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-GPTQ_Q8
* **GGUF**: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-GGUF
## Suggested Prompt Format

Mistral Instruct prompt format
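A minimal sketch of building a Mistral Instruct-style prompt string is below. The `[INST]`/`[/INST]` tag placement follows the common Mistral convention rather than anything stated in this card; for real use, prefer the tokenizer's own chat template (`tokenizer.apply_chat_template`), and the helper name here is hypothetical.

```python
# Illustrative sketch of the Mistral Instruct prompt convention.
# Tag placement is an assumption; prefer the model's own chat template.

def build_mistral_prompt(turns):
    """Build a prompt from (user, assistant) pairs plus a final user message."""
    *history, last_user = turns
    prompt = "<s>"
    for user, assistant in history:
        prompt += f"[INST] {user} [/INST] {assistant}</s>"
    prompt += f"[INST] {last_user} [/INST]"
    return prompt

p = build_mistral_prompt([
    ("Hi!", "Hello! How can I help?"),  # one completed turn
    "Write an opening scene.",          # the new user message
])
print(p)
```

The model's reply is then generated after the final `[/INST]`, and a multi-turn chat simply appends the reply plus `</s>` before the next `[INST]` block.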