# Anima2-GGUF

These are GGUF-quantized versions of the Anima 2 (Preview 2) model, optimized for local inference.
## Compatibility
These models work with:
- ComfyUI (via the ComfyUI-GGUF node).
- stable-diffusion.cpp (CLI).
## CLI Example
```sh
./bin/sd-cli \
  --diffusion-model /path/to/anima-preview2_q4_k.gguf \
  --llm /path/to/qwen_3_06b_base.safetensors \
  --vae /path/to/qwen_image_vae.safetensors \
  --vae-tiling \
  --steps 30 \
  --cfg-scale 4 \
  -W 1024 -H 1024 \
  -p "a cute blue cat in a park" \
  -o output.png
```
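If a GGUF file was truncated or mis-downloaded, the loader's error message can be cryptic. Every GGUF file begins with the four ASCII magic bytes `GGUF`, so a quick local sanity check is possible before launching the CLI. A minimal sketch in Python (the helper name is illustrative, not part of any tool mentioned above):

```python
def looks_like_gguf(path: str) -> bool:
    """Return True if the file starts with the GGUF magic bytes.

    This only checks the 4-byte magic ("GGUF"); it does not validate
    the header version or tensor data, so it catches truncated or
    HTML-error-page downloads, not subtle corruption.
    """
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

Running this against the file you pass to `--diffusion-model` can save a failed generation run.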
## Acknowledgments and Credits
- Original Model: circlestone-labs.
- Engine: Quantized using the stable-diffusion.cpp project.
- Implementation: Special thanks to GitHub contributor @rmatif for implementing Anima support.
- License: These files are subject to the circlestone-labs-non-commercial-license.
## Available Quantizations

- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
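When choosing between these variants, file size (and roughly, memory footprint) scales with the effective bits per weight of the quantization format. A back-of-the-envelope estimate can be sketched as follows; the bits-per-weight figures are approximate (k-quants carry per-block scale overhead) and the function name is illustrative:

```python
# Approximate effective bits per weight for common GGUF quant formats.
# These are ballpark figures, not exact values for any particular file.
APPROX_BPW = {
    "q3_k": 3.4,
    "q4_k": 4.5,
    "q5_k": 5.5,
    "q6_k": 6.6,
    "q8_0": 8.5,
}

def estimate_size_gib(n_params: float, quant: str) -> float:
    """Estimate the quantized weight size in GiB for a parameter count."""
    total_bits = n_params * APPROX_BPW[quant]
    return total_bits / 8 / (1024 ** 3)
```

For example, for a hypothetical 2-billion-parameter model, `estimate_size_gib(2e9, "q4_k")` lands around 1 GiB, versus roughly double that at `q8_0`.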
## Base Model

- circlestone-labs/Anima