## Llamacpp Quantizations of DeepSeek-V3-0324 (MLA version)

This page will be deprecated. For other quantized versions, please refer to [moxin-org/DeepSeek-R1-0528-Moxin-GGUF](https://huggingface.co/moxin-org/DeepSeek-R1-0528-Moxin-GGUF).

Original model: adopts the **BF16** weights and **imatrix** from [unsloth/DeepSeek-R1-0528-GGUF](https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF).

All quants were made with a modified llama.cpp based on [bartowski1182-llama.cpp](https://github.com/bartowski1182/llama.cpp).