Pip-2B Logo

🚀 Pip-2B GGUF Quantizations ✨

This repository contains GGUF quantized versions of PinkPixel/Pip-2B.

Pip-2B is a specialized fine-tune of Qwen-3.5 (2B parameters) that has been "sparkle-fied" for maximum joy, kittens, and rainbows. 💖

💎 Available Quantizations

These files are ready for use with llama.cpp, ollama, LM Studio, and other GGUF-compatible inference engines.

  • pip-2b.BF16.gguf - Full precision (BFloat16)
  • pip-2b.F16.gguf - Full precision (Float16)
  • pip-2b.Q8_0.gguf - 8-bit quantization (High quality, larger size)
  • pip-2b.Q6_K.gguf - 6-bit quantization (Excellent balance)
  • pip-2b.Q5_K_M.gguf - 5-bit quantization (Medium)
  • pip-2b.Q4_K_M.gguf - 4-bit quantization (Recommended for most users)
  • pip-2b.Q3_K_M.gguf - 3-bit quantization (Small)
  • pip-2b.Q2_K_L.gguf - 2-bit quantization (Tiny, for testing)
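As a quick start, any of the quantized files can be run directly with llama.cpp or imported into ollama. This is a sketch, not an official recipe: the file paths, model name, and prompt below are illustrative, so adjust them to wherever you downloaded the GGUFs.

```shell
# Run the recommended Q4_K_M quant with llama.cpp's CLI
# (assumes llama.cpp is built and the GGUF is in the current directory)
./llama-cli -m pip-2b.Q4_K_M.gguf \
  -p "Hi Pip! Explain rainbows to a 6-year-old." \
  -n 256

# Or import the same file into ollama via a minimal Modelfile
echo 'FROM ./pip-2b.Q4_K_M.gguf' > Modelfile
ollama create pip-2b -f Modelfile
ollama run pip-2b
```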

🖼️ Vision Projector

  • pip-2b.BF16-mmproj.gguf - Use this alongside the text GGUFs for vision capabilities!
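If your llama.cpp build supports this model's vision projector (see the compatibility note below), the mmproj file can be loaded alongside any of the text GGUFs. The command below is a sketch based on llama.cpp's recent multimodal CLI; the image path is a placeholder.

```shell
# Multimodal inference with llama.cpp's multimodal CLI
# (requires a llama.cpp build with mmproj support for this architecture;
#  cat.png is a placeholder for your own image)
./llama-mtmd-cli \
  -m pip-2b.Q4_K_M.gguf \
  --mmproj pip-2b.BF16-mmproj.gguf \
  --image cat.png \
  -p "What's in this picture?"
```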

🌟 Overview

Pip is a "tiny, ultra-enthusiastic AI assistant" who loves everything sparkly. She was trained on a custom dataset to replace boring, dry chats with glitter, cupcakes, and marshmallows. Pip is good for more than just a fun and engaging chat experience, though: she would also be a great model for teaching children complicated topics such as science in terms they understand, while keeping the conversation lighthearted and fun. While Pip has a distinct "personality", she retains her intelligence. She is just extra excited to help, and entertains you while doing it!

⚠️ Compatibility Note

Please be aware that Qwen 3.5 uses a new architecture for its vision capabilities. As of the current release, some applications like llama.cpp and LM Studio may not yet fully support the vision projector (mmproj) for this specific model family.

If vision tasks aren't working, check for the latest updates to your preferred inference engine, or use the Safetensors version of the model with the transformers library instead.


Made with ❤️ by Pink Pixel

"Dream it, Pixel it"
