Update README.md
README.md
CHANGED

@@ -59,12 +59,14 @@ You can play and set it up for your needs, e.g. 8 snippets at 2048t, or 28 snippets at 512
 <li>16000t (~12000 words) ~1.5GB VRAM usage</li>
 <li>32000t (~24000 words) ~3GB VRAM usage</li>
 </ul>
+<br>
 here is a tokenizer calculator<br>
 - https://quizgecko.com/tools/token-counter <br>
 and a VRAM calculator (you need the original model link, NOT the GGUF)<br>
 - https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator<br>
 
 <br>
+QwQ-LCoT- (7/14b) - https://huggingface.co/mradermacher/QwQ-LCoT-14B-Conversational-GGUF<br>
 ...
 <br>
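For a quick sanity check without the online calculators, the ratios quoted above (about 0.75 words per token, and roughly 1.5GB of extra VRAM per 16000 tokens of context) can be scaled linearly. This is only a sketch using the README's own two data points; the function names are illustrative, and real VRAM usage varies by model architecture and quantization.

```python
def words_to_tokens(words: int) -> int:
    """Estimate token count from a word count (~0.75 words per token)."""
    return round(words / 0.75)


def context_vram_gb(tokens: int) -> float:
    """Estimate extra VRAM (GB) for a given context size, scaled
    linearly from the 16000t ~= 1.5GB data point above."""
    return round(tokens / 16000 * 1.5, 2)


print(words_to_tokens(12000))   # 16000 tokens
print(context_vram_gb(32000))   # 3.0 GB, matching the 32000t figure
```

For exact numbers, still use the linked token counter and the NyxKrage VRAM calculator (with the original model repo, not the GGUF).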