Anima - GGUF Quantized Models
These are GGUF quantized versions of the Anima model, optimized for local inference on low-RAM devices using stable-diffusion.cpp.
Available Formats
- anima-preview-Q4_K.gguf : Smallest footprint; best for low-RAM devices.
- anima-preview-Q5_K.gguf : Balanced size and quality.
- anima-preview-Q6_K.gguf : Highest quality; requires slightly more memory.
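Large downloads occasionally truncate or corrupt. As a quick sanity check (a minimal sketch assuming a POSIX shell; the file names above are the ones you downloaded), you can verify that a file really is a GGUF container, since every GGUF file begins with the 4-byte magic string "GGUF":

```shell
# Check the 4-byte GGUF magic at the start of a downloaded file.
check_gguf() {
  if [ "$(head -c 4 "$1")" = "GGUF" ]; then
    echo "valid GGUF: $1"
  else
    echo "not a GGUF file: $1"
  fi
}

# Only run the check if the file is actually present.
[ -f anima-preview-Q4_K.gguf ] && check_gguf anima-preview-Q4_K.gguf
```

This only validates the container header, not the tensor data; a full integrity check would compare the published SHA-256 checksum, if one is provided.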
How to Use
Ensure you are using the latest version of stable-diffusion.cpp (Anima support is integrated into the main branch).
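If you have not built stable-diffusion.cpp yet, a typical from-source build looks like the following (a sketch based on the project's standard CMake workflow; consult the stable-diffusion.cpp README for the GPU backend flags appropriate to your hardware):

```shell
# Clone with submodules (ggml is vendored as a submodule).
git clone --recursive https://github.com/leejet/stable-diffusion.cpp
cd stable-diffusion.cpp

# Configure and build a release binary; pass additional -D options
# from the project README to enable an optional GPU backend.
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release
```

The resulting binary lands under the build directory, which is why the command below is invoked as ./bin/sd-cli from there.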
./bin/sd-cli \
--diffusion-model /path/to/anima-preview-Q4_K.gguf \
--llm /path/to/qwen_3_06b_base.safetensors \
--vae /path/to/qwen_image_vae.safetensors \
--vae-tiling \
--steps 30 \
--cfg-scale 4 \
-W 1024 -H 1024 \
-p "a cute blue cat in a park" \
-o output.png
(Note: the --vae-tiling flag decodes the image in tiles rather than all at once, and is recommended to prevent out-of-memory errors during the VAE decoding phase.)
Acknowledgments and Credits
- Original Model: circlestone-labs.
- Engine: The stable-diffusion.cpp project for making local AI accessible.
- Implementation: Special thanks to GitHub contributor @rmatif for their work implementing Anima support.
- License: These files are subject to the circlestone-labs-non-commercial-license.