Instructions to use black-forest-labs/FLUX.2-dev with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use black-forest-labs/FLUX.2-dev with Diffusers:
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.2-dev", dtype=torch.bfloat16, device_map="cuda")

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(image=input_image, prompt=prompt).images[0]
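The snippet above loads the whole pipeline onto a single CUDA device. On GPUs that cannot hold all the weights at once, a common variant (a sketch, not an official recommendation) is to let Diffusers offload idle components to the CPU; enable_model_cpu_offload is a standard DiffusionPipeline method, and the output filename below is arbitrary:

import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Load without device_map so components can be offloaded individually.
pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.2-dev", dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # moves each component to the GPU only while it runs

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(image=input_image, prompt=prompt).images[0]
image.save("flux2_dev_edit.png")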
- Diffusion Single File
How to use black-forest-labs/FLUX.2-dev with Diffusion Single File:
# No code snippets available yet for this library.
# To use this model, check the repository files and the library's documentation.
# Want to help? PRs adding snippets are welcome at:
# https://github.com/huggingface/huggingface.js
- Inference
- Notebooks
- Google Colab
- Kaggle
Flux 1 Dev vs Flux 1 Krea Dev vs Flux 2 Dev Comparisons
Could you try a few with two or more people interacting using their hands? Just handshaking, nothing crazy, and just as a sanity check. I get SDXL levels of bad there (too many arms, six fingers, ...), using the ComfyUI default template with default settings, except with a GGUF Q6 version. But it can't possibly be THAT bad; it must be some technical problem with the GGUF version or something.
Little update: even on the official BFL pages on Replicate (FLUX.2 dev AND pro), the anatomy issues are catastrophically bad.
Dang... nice comparison. Honestly, I noticed they're all in FP8. That does cause slight precision loss, but I find it acceptable, and I'm honestly shocked you even got away with FP8 🤣. The model needs something like 32 GB of VRAM to run and won't fit on a 16 GB card easily (it would need 4-bit or 2-bit quantization).
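As a rough illustration of the 4-bit route mentioned above, here is a minimal sketch. It assumes a recent Diffusers release that exposes PipelineQuantizationConfig and that bitsandbytes is installed; the components_to_quantize names are an assumption for this checkpoint, not confirmed from the repository:

import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# 4-bit NF4 quantization via bitsandbytes to shrink the VRAM footprint.
quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    components_to_quantize=["transformer", "text_encoder"],  # assumed component names
)

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # further reduces peak VRAM at the cost of speed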
Judging by the results, I guess this aligns with their stated R&D philosophy of "minimizing the AI look".
I've built a new series of nodes that will compare any model, even video models, against any combination (where compatible) of VAE, CLIP, sampler, steps, etc., so I will be running a more extensive comparison of Flux 2 against ZImage, Flux Krea, Flux 1, Qwen Image, and Wan/Hunyuan across a range of parameters.
I'll update this comment with results once I do, and I'll share the nodes so anyone can run more comprehensive comparisons.
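In plain Diffusers terms (outside ComfyUI), that kind of sweep amounts to fixing a prompt and seed and iterating over checkpoints and step counts. A minimal sketch, with the model list and step counts as placeholder assumptions:

import torch
from diffusers import DiffusionPipeline

prompt = "two people shaking hands, natural light, candid photo"
seed = 42
models = ["black-forest-labs/FLUX.2-dev", "black-forest-labs/FLUX.1-dev"]  # placeholder checkpoints
step_counts = [20, 28, 50]  # placeholder sampler step counts

for repo in models:
    pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.bfloat16)
    pipe.enable_model_cpu_offload()
    for steps in step_counts:
        generator = torch.Generator("cpu").manual_seed(seed)  # same seed for every run
        image = pipe(prompt, num_inference_steps=steps, generator=generator).images[0]
        image.save(f"{repo.split('/')[-1]}_{steps}steps.png")
    del pipe
    torch.cuda.empty_cache()  # free VRAM before loading the next checkpoint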
Any conclusion?









