Add GPTQ quant link
#3 by GusPuffy - opened

README.md CHANGED
```diff
@@ -52,6 +52,7 @@ The model is available in quantized formats:
 * **FP16**: https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3
 * **GGUF**: https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3-GGUF
 * **Bartowski's GGUF**: https://huggingface.co/bartowski/Llama-3.1-70B-ArliAI-RPMax-v1.3-GGUF
+* **GPTQ (W4A16)**: https://huggingface.co/GusPuffy/Llama-3.1-70B-ArliAI-RPMax-v1.3-GPTQ
 
 ## Suggested Prompt Format
 
```