---
language:
- en
base_model:
- black-forest-labs/FLUX.2-dev
license: other
license_name: flux-dev-non-commercial-license
license_link: LICENSE.md
extra_gated_prompt: >-
  By clicking "Agree", you agree to the [FLUX [dev] Non-Commercial License
  Agreement](https://huggingface.co/black-forest-labs/FLUX.2-dev/blob/main/LICENSE.txt)
  and acknowledge the [Acceptable Use
  Policy](https://bfl.ai/legal/usage-policy).
tags:
- image-generation
- image-editing
- flux
- diffusion-single-file
- comfyui
- fp8_scaled
pipeline_tag: image-to-image
---
## You can find a workflow with samples in the [workflow_assets](https://huggingface.co/silveroxides/FLUX.2-dev-fp8_scaled/tree/main/workflow_assets) folder.
### The [workflow](https://huggingface.co/silveroxides/FLUX.2-dev-fp8_scaled/blob/main/workflow_assets/fp8_scaled_flux2_w_enhanced_prompting-workflow.json) contains the information and links needed to get started with this model.
### This fp8_scaled model is faster than the official release from ComfyOrg when used with the loader in the workflow.
### The custom node that loads the model in the workflow is required to get the fastest inference on lower-VRAM GPUs.
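For context, "fp8_scaled" checkpoints store weights in an 8-bit floating-point format together with a scale factor that maps them back into their original range at load time. The sketch below illustrates the per-tensor variant of that idea with NumPy; it only models the scaling step (the actual cast to an fp8 dtype is omitted), and the function names are hypothetical, not part of this repository or ComfyUI.

```python
import numpy as np

# Largest finite value representable in the fp8 e4m3 format.
FP8_E4M3_MAX = 448.0

def quantize_scaled(w: np.ndarray):
    """Scale a weight tensor so its values fit in the fp8 e4m3 range.

    Returns the scaled tensor and the per-tensor scale factor.
    A real implementation would cast the scaled tensor to an fp8
    dtype here; this sketch keeps float32 to stay self-contained.
    """
    scale = np.max(np.abs(w)) / FP8_E4M3_MAX
    q = np.clip(w / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q.astype(np.float32), scale

def dequantize_scaled(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover (approximately) the original weights at load time."""
    return q * scale

# Round-trip a small random weight tensor.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, s = quantize_scaled(w)
w_hat = dequantize_scaled(q, s)
```

Because the fp8 cast itself is skipped, the round trip here is near-exact; in a real fp8_scaled checkpoint the cast introduces a small quantization error, which the per-tensor scale keeps bounded.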
|  | |
|  |