auto-patch README.md
README.md CHANGED
@@ -37,6 +37,18 @@ more details, including on how to concatenate multi-part files.
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.TQ1_0.gguf) | TQ1_0 | 0.2 | tighter ternary packing |
 | [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.TQ2_0.gguf) | TQ2_0 | 0.3 | faster ternary packing |
+| [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.Q2_K.gguf) | Q2_K | 0.3 | |
+| [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
+| [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
+| [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
+| [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
+| [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
+| [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.Q6_K.gguf) | Q6_K | 0.5 | very good quality |
+| [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
+| [GGUF](https://huggingface.co/mradermacher/BitCPM4-0.5B-GGUF/resolve/main/BitCPM4-0.5B.f16.gguf) | f16 | 1.0 | 16 bpw, overkill |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
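
To try one of the quants added above, the file can be fetched with the `huggingface_hub` client and handed to any GGUF-capable runtime. A minimal sketch in Python, assuming `huggingface_hub` is installed; the repo id and filename are taken directly from the Q4_K_M link in the table:

```python
# Minimal sketch: download the recommended Q4_K_M quant from the table above.
# Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/BitCPM4-0.5B-GGUF",
    filename="BitCPM4-0.5B.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```

The resulting path can be passed to a GGUF-aware runtime such as llama.cpp (e.g. `llama-cli -m <path>`). As a rough sanity check on the Size/GB column: f16 stores 16 bits per weight, so a ~0.5B-parameter model works out to about 0.5e9 × 16 / 8 bytes ≈ 1.0 GB, matching the f16 row; the quantized types scale roughly with their bits per weight, plus some per-block metadata.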