Instructions to use Bedovyy/Anima-INT8 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
Regarding the loading of the model
Hello, I'd like to ask: if I load the model with the latest INT8 (W8A8) model loading node, how should I set model_type? Could you provide a screenshot for reference? Thank you.
The latest ComfyUI-Flux2-INT8 has an on-the-fly option; if you enable it, the model you select will be quantized on the fly to the chosen model_type.
In other words, when you use a prequantized model you don't need to set model_type (keep it at the default, Flux2 afaik) - just disable the on-the-fly option.
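For intuition, here is a minimal sketch of the difference between the two INT8 weight layouts discussed in this thread: tensorwise quantization (one scale shared by the whole weight tensor) versus rowwise quantization (one scale per row, which preserves small-magnitude rows better). This is my own illustration in plain Python, not code from the custom node.

```python
# Illustrative sketch of tensorwise vs. rowwise INT8 quantization.
# Not the custom node's actual implementation.

def quantize_tensorwise(weights):
    """One shared scale for the whole 2-D weight matrix."""
    flat = [abs(v) for row in weights for v in row]
    scale = max(flat) / 127.0 or 1.0  # avoid a zero scale
    q = [[round(v / scale) for v in row] for row in weights]
    return q, scale

def quantize_rowwise(weights):
    """One scale per row: finer granularity, slightly more metadata."""
    qs, scales = [], []
    for row in weights:
        scale = max(abs(v) for v in row) / 127.0 or 1.0
        qs.append([round(v / scale) for v in row])
        scales.append(scale)
    return qs, scales

def dequantize_rowwise(q, scales):
    """Recover approximate float weights from rowwise INT8 + scales."""
    return [[v * s for v in row] for row, s in zip(q, scales)]

# A row of large weights next to a row of tiny ones: rowwise keeps the
# tiny row's precision, while tensorwise crushes it toward a few levels.
w = [[0.5, -1.0], [0.01, 0.02]]
q_row, scales = quantize_rowwise(w)
w_approx = dequantize_rowwise(q_row, scales)
```

A prequantized checkpoint ships the integer weights and scales already baked in; the on-the-fly option performs a pass like this at load time instead.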
It doesn't work! Can you provide examples?
You can use the int8 (tensorwise) model with the custom node https://github.com/BobJohnson24/ComfyUI-Flux2-INT8
If you want to use the int8 rowwise model, check that the custom node is on the Quip branch:
~/Servers/ComfyUI/custom_nodes/ComfyUI-Flux2-INT8$ git branch
* Quip
main
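Assuming a standard git clone of the custom node into ComfyUI's custom_nodes directory, switching to the Quip branch would look something like this (paths are from the example above):

```shell
cd ~/Servers/ComfyUI/custom_nodes/ComfyUI-Flux2-INT8
git fetch origin          # make sure the remote branches are known locally
git checkout Quip         # switch to the Quip branch
git pull                  # update it to the latest commit
```

Restart ComfyUI afterwards so the node is reloaded from the new branch.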
Thanks
