datasets:
- PrimeIntellect/Intellect-2-RL-Dataset
---
**Please read [Running QwQ effectively](https://docs.unsloth.ai/basics/tutorial-how-to-run-qwq-32b-effectively) on sampling issues for QwQ-based models.**
Or, TL;DR, use the settings below:
```bash
./llama.cpp/llama-cli -hf unsloth/INTELLECT-2-GGUF:Q4_K_XL -ngl 99 \
--temp 0.6 \
--repeat-penalty 1.1 \
--dry-multiplier 0.5 \
--min-p 0.00 \
--top-k 40 \
--top-p 0.95 \
--samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc"
```
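The `--samplers` string above fixes the order in which the individual samplers run (top-k before top-p before min-p, and so on), and that order matters: each stage further prunes or reshapes the distribution the next one sees. As a rough illustration of what top-k, top-p, and temperature each do, here is a toy Python sketch over a made-up token distribution. This is illustrative only, not llama.cpp's actual sampler code; the function name, the toy logits, and the simplified stage order are all assumptions, and the DRY/repeat-penalty stages are not modelled.

```python
import math

def sample_filter(logits, top_k=40, top_p=0.95, temperature=0.6):
    """Toy sketch: apply top-k, then top-p (nucleus), then temperature
    to raw logits, returning the final distribution over surviving tokens."""
    # Top-k: keep only the k highest-scoring tokens.
    kept = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Softmax over the survivors (subtract the max for numerical stability).
    m = max(v for _, v in kept)
    exps = {t: math.exp(v - m) for t, v in kept}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}
    # Top-p: keep the smallest high-probability prefix whose cumulative
    # probability reaches top_p.
    nucleus, cum = {}, 0.0
    for t, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        nucleus[t] = p
        cum += p
        if cum >= top_p:
            break
    # Temperature: a value below 1.0 sharpens the surviving distribution
    # (equivalent to raising each probability to the power 1/temperature),
    # then renormalise.
    scaled = {t: p ** (1.0 / temperature) for t, p in nucleus.items()}
    z = sum(scaled.values())
    return {t: v / z for t, v in scaled.items()}
```

With `temperature=0.6` the post-filter distribution is noticeably sharper than the raw softmax, which is why the recommended settings pair a low temperature with fairly permissive top-k/top-p cutoffs.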
# INTELLECT-2
INTELLECT-2 is a 32 billion parameter language model trained through a reinforcement learning run leveraging globally distributed, permissionless GPU resources contributed by the community.