
You can find a workflow with samples in the workflow_assets folder.

The workflow contains the information and links needed to get started with this model.

This fp8_scaled model is faster than the official one released by ComfyOrg when used with the loader node in the workflow.
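As a rough illustration of what an "fp8_scaled" checkpoint means: each weight tensor is stored in 8-bit floating point together with a per-tensor scale factor, and the loader dequantizes by multiplying the two back together. The sketch below emulates only the scaling step in plain Python (it does not perform real fp8 mantissa rounding, and the function names are made up for illustration, not taken from any loader's API):

```python
# Illustrative sketch of per-tensor scaled fp8 quantization.
# Real fp8_scaled checkpoints store weights as float8 plus a scale tensor;
# here we only emulate the scale bookkeeping, not the 8-bit rounding.

E4M3_MAX = 448.0  # largest finite value representable in float8 e4m3


def quantize_scaled(weights):
    """Pick a per-tensor scale so the largest weight maps to E4M3_MAX."""
    amax = max(abs(w) for w in weights)
    scale = amax / E4M3_MAX if amax > 0 else 1.0
    # A real implementation would also round each value to fp8 precision.
    quantized = [w / scale for w in weights]
    return quantized, scale


def dequantize_scaled(quantized, scale):
    """Recover approximate original weights: w ~= q * scale."""
    return [q * scale for q in quantized]


weights = [0.5, -1.25, 3.0]
q, scale = quantize_scaled(weights)
restored = dequantize_scaled(q, scale)
```

Because the scale is chosen per tensor, the quantized values always fit the fp8 dynamic range regardless of the original weight magnitudes, which is why the format loses less accuracy than unscaled fp8 casting.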

The custom node that loads the model in the workflow is necessary to obtain the fastest inference on lower-VRAM GPUs.

If you want to use mxfp8, you need the silveroxides/ComfyUI-QuantOps custom node and the custom comfy-kitchen repo (follow its instructions) until official support is merged.

Use BobJohnson24/ComfyUI-Flux2-INT8 for the int8 tensorwise model.
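If you want to verify which quantization a downloaded checkpoint actually uses, you can read the .safetensors header with only the standard library: the format starts with an 8-byte little-endian length followed by a JSON header listing every tensor's dtype. The snippet below builds a tiny in-memory example instead of reading the real checkpoint, and the tensor names in it are hypothetical:

```python
import json
import struct

def read_safetensors_dtypes(blob):
    """Parse a safetensors byte blob and map tensor names to dtypes.

    Per the safetensors spec, the first 8 bytes are a little-endian
    uint64 giving the JSON header length; the header maps each tensor
    name to its dtype, shape, and data offsets.
    """
    (header_len,) = struct.unpack("<Q", blob[:8])
    header = json.loads(blob[8:8 + header_len].decode("utf-8"))
    return {name: info["dtype"]
            for name, info in header.items()
            if name != "__metadata__"}

# Build a minimal fake checkpoint in memory (names are illustrative only).
meta = {
    "model.weight": {"dtype": "F8_E4M3", "shape": [2], "data_offsets": [0, 2]},
    "model.scale":  {"dtype": "F32",     "shape": [],  "data_offsets": [2, 6]},
}
header_bytes = json.dumps(meta).encode("utf-8")
blob = struct.pack("<Q", len(header_bytes)) + header_bytes + bytes(6)

dtypes = read_safetensors_dtypes(blob)
```

To check a real file, replace `blob` with the first bytes read from the checkpoint; an fp8_scaled model should show `F8_E4M3` weight tensors alongside float scale tensors.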

Workflow

Sample

Model tree for silveroxides/FLUX.2-dev-fp8_scaled

Finetuned
(26)
this model