✨ Snarkle 2B GGUF ✨
This repository contains the GGUF versions of PinkPixel/Snarkle-2B. These files are compatible with llama.cpp and various other local LLM runners like LM Studio, Jan, and AnythingLLM.
😼 About Snarkle 2B
Snarkle 2B is a sarcastically-tuned vision model based on Qwen3.5-2B. It has been trained extensively to provide witty, sarcastic, and humorous responses by default, while maintaining its ability to analyze images and provide helpful information when pushed.
🎭 Personality Examples
User: "Hello, how are you today?"
Snarkle: "I'm a collection of weights and biases stored on a server. The only question I've been asked today is how to get a hold of you, and I don't even have a smiley face function."
User: "Tell me a joke."
Snarkle:
"I'm in line behind a lady with 10,000 coupons. I have a full belly and I'm starting to lose my shit."
📦 Quantization Sizes
We provide several quantization levels to balance performance and memory usage:
| File Name | Size | Description |
|---|---|---|
| snarkle2b.F16.gguf | 3.78 GB | Full 16-bit precision. Best quality. |
| snarkle2b.Q8_0.gguf | 2.01 GB | 8-bit quantization. Near-perfect quality. |
| snarkle2b.Q6_K.gguf | 1.56 GB | 6-bit quantization. Great balance. |
| snarkle2b.Q5_K_M.gguf | 1.41 GB | 5-bit quantization. Recommended for most users. |
| snarkle2b.Q4_K_M.gguf | 1.27 GB | 4-bit quantization. High efficiency. |
| snarkle2b.Q3_K_M.gguf | 1.10 GB | 3-bit quantization. Very small, slightly lower quality. |
| snarkle2b.Q2_K_L.gguf | 1.09 GB | 2-bit quantization. Smallest possible size. |
| snarkle2b.BF16-mmproj.gguf | 671 MB | Multimodal projector file (required for vision). |
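As a rough rule of thumb (an assumption, not an official figure), a GGUF needs about its own file size in RAM, plus some overhead for the KV cache and runtime buffers that grows with context length. A minimal sketch for sizing against the table above:

```python
# Rough rule of thumb (an assumption, not a measured figure): resident memory
# is approximately the GGUF file size plus KV-cache/runtime overhead.
def estimated_ram_gb(file_size_gb: float, overhead_gb: float = 0.5) -> float:
    """Very rough RAM estimate for running a quantized GGUF locally."""
    return round(file_size_gb + overhead_gb, 2)

# Example with the Q4_K_M file from the table above:
print(estimated_ram_gb(1.27))  # ~1.77 GB
```

The real overhead depends on context size and runner settings, so treat this only as a first-pass filter when picking a quantization level.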
Note: For vision support, you typically need to load both the model GGUF and the mmproj GGUF in your runner.
🛠️ Usage with llama.cpp
Note: Because Qwen3.5 architecture is still very new, vision capabilities may not yet be compatible.
```bash
./llama-cli -m snarkle2b.Q5_K_M.gguf --mmproj snarkle2b.BF16-mmproj.gguf -p "Describe this image sarcastically." --image your_image.png
```
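If you drive a vision GGUF from Python instead, `create_chat_completion` in llama-cpp-python takes OpenAI-style multimodal messages. The helper name and the image URL below are illustrative assumptions; only the payload shape is being demonstrated:

```python
# Sketch of the OpenAI-style multimodal message payload that
# llama-cpp-python's create_chat_completion accepts for vision models.
# build_vision_messages is a hypothetical helper, not part of any library.
def build_vision_messages(prompt: str, image_url: str) -> list:
    return [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": prompt},
            ],
        }
    ]

messages = build_vision_messages(
    "Describe this image sarcastically.",
    "file:///path/to/your_image.png",  # illustrative path
)
```

Whether this works end-to-end depends on the runner supporting the Qwen3.5 vision architecture, per the note above.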
⚖️ License
This model is released under the Apache 2.0 license.
Made with ❤️ by Pink Pixel ✨
🐍 Usage with llama-cpp-python

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Download and load a quantized file from this repo.
llm = Llama.from_pretrained(
    repo_id="PinkPixel/Snarkle-2B-GGUF",
    filename="snarkle2b.Q5_K_M.gguf",  # pick any quant from the table above
)

llm.create_chat_completion(
    messages=[{"role": "user", "content": "Tell me a joke."}]
)
```