Update README.md
README.md (changed)

```diff
@@ -168,10 +168,8 @@ The most important aspect of this work is to make it fresh, trained on datasets
-FP16: soon...
-EXL2: soon...
-GGUF: soon...
+
 
 ## LLAMA-3_8B_Unaligned_Alpha is available at the following quantizations:
-- [FP16](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)
-- [GGUFs](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_GGUF)
 
 - Original: [FP16](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)
 - GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_GGUF) | [iMatrix_GGUF](https://huggingface.co/bartowski/LLAMA-3_8B_Unaligned_Alpha-GGUF)
```
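Files in the quantized repos linked above can be downloaded directly; the Hugging Face Hub serves repo files under `/resolve/<revision>/<filename>`. A minimal sketch of building such a URL (the `.gguf` filename below is hypothetical; check the repo's file list for the real names):

```python
# Sketch: construct a direct-download URL for a file in a Hugging Face repo.
# Assumes the Hub's standard /resolve/<revision>/<filename> URL scheme.
def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Hypothetical filename for illustration only; not taken from the repo.
url = hf_resolve_url(
    "SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_GGUF",
    "model-Q4_K_M.gguf",
)
print(url)
```

The same URL can be passed to `wget`/`curl`, or the file fetched with `huggingface_hub.hf_hub_download(repo_id, filename)` if the client library is installed.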