This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
Super tiny version of Llama's 1B-parameter model, quantized at the lowest precision Unsloth offers. Training it on junk data and destroying the weights should have fully lobotomized it, but it honestly works a little too well for something around ~1GB.
Shoutout to Unsloth's quantization magic, I guess...
## Available Model files:
- `llama-3.2-1b-instruct.Q3_K_S.gguf`
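A minimal sketch of running the quantized file locally, assuming you have a llama.cpp build with `llama-cli` in the current directory (the prompt and token count here are just illustrative):

```shell
# Run the Q3_K_S quant with llama.cpp's CLI.
# -m : path to the GGUF model file
# -p : prompt text
# -n : maximum number of tokens to generate
MODEL=llama-3.2-1b-instruct.Q3_K_S.gguf
./llama-cli -m "$MODEL" -p "Hello there" -n 64
```

Any llama.cpp build recent enough to read GGUF should work; adjust the path to `llama-cli` to match wherever you built it.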