Update README.md
README.md CHANGED

@@ -84,6 +84,6 @@ language:
 | [SmolVLM2-2.2B-Instruct-Q6_K.gguf](https://huggingface.co/second-state/SmolVLM2-2.2B-Instruct-GGUF/blob/main/SmolVLM2-2.2B-Instruct-Q6_K.gguf) | Q6_K | 6 | 1.49 GB| very large, extremely low quality loss |
 | [SmolVLM2-2.2B-Instruct-Q8_0.gguf](https://huggingface.co/second-state/SmolVLM2-2.2B-Instruct-GGUF/blob/main/SmolVLM2-2.2B-Instruct-Q8_0.gguf) | Q8_0 | 8 | 1.93 GB| very large, extremely low quality loss - not recommended |
 | [SmolVLM2-2.2B-Instruct-f16.gguf](https://huggingface.co/second-state/SmolVLM2-2.2B-Instruct-GGUF/blob/main/SmolVLM2-2.2B-Instruct-f16.gguf) | f16 | 16 | 3.63 GB| |
-| [SmolVLM2-2.2B-Instruct-mmproj-f16.gguf](https://huggingface.co/second-state/SmolVLM2-2.2B-Instruct-GGUF/blob/main/SmolVLM2-2.2B-Instruct-mmproj-f16.gguf) | f16 | 16 |
+| [SmolVLM2-2.2B-Instruct-mmproj-f16.gguf](https://huggingface.co/second-state/SmolVLM2-2.2B-Instruct-GGUF/blob/main/SmolVLM2-2.2B-Instruct-mmproj-f16.gguf) | f16 | 16 | 872 MB| |

 *Quantized with llama.cpp b5501*
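The mmproj file touched by this commit is the multimodal projector that llama.cpp loads alongside the language-model GGUF so the model can accept image input. A minimal usage sketch, assuming a llama.cpp build at or after b5501 that ships the `llama-mtmd-cli` multimodal tool, `huggingface-cli` on PATH, and a placeholder image `example.jpg` and prompt:

```shell
# Fetch one quantized language model plus the multimodal projector
# (repo and file names are taken from the table above).
huggingface-cli download second-state/SmolVLM2-2.2B-Instruct-GGUF \
    SmolVLM2-2.2B-Instruct-Q6_K.gguf \
    SmolVLM2-2.2B-Instruct-mmproj-f16.gguf \
    --local-dir .

# Run an image + text prompt. --mmproj points at the projector file;
# llama-mtmd-cli is llama.cpp's multimodal CLI (assumed present in your build).
llama-mtmd-cli -m SmolVLM2-2.2B-Instruct-Q6_K.gguf \
    --mmproj SmolVLM2-2.2B-Instruct-mmproj-f16.gguf \
    --image example.jpg \
    -p "Describe this image."
```

Without the `--mmproj` argument the Q6_K/Q8_0/f16 files behave as text-only models, which is why the projector is published as a separate GGUF.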