Instructions for using circlestone-labs/Anima with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusion Single File
How to use circlestone-labs/Anima with Diffusion Single File:
```
# No code snippets available yet for this library.
# To use this model, check the repository files and the library's documentation.
# Want to help? PRs adding snippets are welcome at:
# https://github.com/huggingface/huggingface.js
```
- Notebooks
- Google Colab
- Kaggle
Black image output in FP16 compute dtype.
Is it possible to run inference with the fp16 compute dtype in ComfyUI? I'm getting black images. Otherwise, generation on my old pre-RTX-3000 GPU is very slow because everything is upcast to fp32.
This flag worked for me: "--fp16-unet"
Weird, I still get black output:

```
RuntimeWarning: invalid value encountered in cast
  img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
```
Have you tried '--force-upcast-attention'?
Remove any other attention backends like SageAttention first, then try.
Yeah, still black. FP16 doesn't work at all. If you're getting a proper image, I guess your GPU supports bf16. And it's kind of crazy that not using the GPU at all (--novram) is almost as slow as using a non-bf16 GPU.
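For context (a minimal sketch, not from the thread): fp16 has a far smaller dynamic range than bf16, so large activations can overflow to inf and propagate as NaN through the model; NaN survives `np.clip`, and casting it to `uint8` produces exactly the "invalid value encountered in cast" warning quoted earlier, which decodes as a black image.

```python
import numpy as np

# fp16 can only represent values up to 65504; bf16 keeps the full
# fp32 exponent range, which is why bf16-capable GPUs don't hit this.
print(np.finfo(np.float16).max)  # 65504.0

with np.errstate(over="ignore", invalid="ignore"):
    x = np.float16(70000.0)      # overflows to inf in fp16
    latent = np.array([x - x])   # inf - inf -> NaN

# NaN passes through np.clip unchanged, and casting NaN to uint8
# triggers "RuntimeWarning: invalid value encountered in cast" --
# the resulting pixel values are garbage, rendering a black image
with np.errstate(invalid="ignore"):
    img = np.clip(latent, 0, 255).astype(np.uint8)
```

This is also why flags like `--force-upcast-attention` can help: they keep the overflow-prone operations in fp32.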
Someone posted a patch about an hour ago that makes the model work in fp16 in ComfyUI; with it, the model is about 2x slower than SDXL.
Any links?
EDIT: found it
https://civitai.com/models/2356447/anima-fp8