Instructions for using ostris/OpenFLUX.1 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use ostris/OpenFLUX.1 with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "ostris/OpenFLUX.1",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Recommended inference parameter values
Hi, thanks for training this, it's really interesting. I tried a number of different inference parameter settings, but in my testing the model didn't appear to exceed the quality of FLUX.1 schnell. Can you recommend ideal inference parameter settings? Thanks again.
Did you set up the custom pipeline, which enables inference with real CFG and negative prompting? There's a custom pipeline code example in the files here. Link:
https://huggingface.co/ostris/OpenFLUX.1/blob/main/open_flux_pipeline.py
There's also another, more minimal version for fast inference (without negative-prompting support) at https://huggingface.co/spaces/KingNish/Realtime-FLUX/blob/main/custom_pipeline.py
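For context, "real CFG" means the pipeline runs both a conditional and an unconditional (or negative-prompt) prediction at each denoising step and combines them, which distilled models like FLUX.1 schnell normally skip. A minimal sketch of just the combination step, using plain Python lists for clarity (the function name and shapes are illustrative, not taken from the actual pipeline code):

```python
def apply_cfg(noise_uncond, noise_cond, guidance_scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional / negative-prompt branch toward the conditional one,
    elementwise over the model's noise predictions."""
    return [u + guidance_scale * (c - u)
            for u, c in zip(noise_uncond, noise_cond)]

# With guidance_scale=1.0 the result is exactly the conditional prediction;
# larger scales amplify the difference between the two branches.
print(apply_cfg([0.0, 0.0], [1.0, 1.0], 1.0))  # [1.0, 1.0]
print(apply_cfg([0.0, 0.0], [1.0, 1.0], 3.5))  # [3.5, 3.5]
```

This doubling of model evaluations is also why a CFG-enabled pipeline is roughly twice as slow per step as the minimal fast-inference version.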
Yes, I did. The issue was that I wasn't using the LoRA. I'm not clear why the decision was made to release this with a LoRA that wasn't merged into the base weights.
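For anyone unfamiliar with what "merging" means here: folding a LoRA into the base weights just adds the low-rank product back into each affected weight matrix, after which the adapter no longer needs to be loaded separately. A toy sketch of the arithmetic (dimensions and the `scale` factor are illustrative; this is not the actual OpenFLUX.1 merge script):

```python
import numpy as np

def merge_lora(W: np.ndarray, A: np.ndarray, B: np.ndarray,
               scale: float) -> np.ndarray:
    """Fold a LoRA update into a base weight matrix: W' = W + scale * (B @ A).

    W: (out, in) base weight
    A: (rank, in) down-projection
    B: (out, rank) up-projection
    """
    return W + scale * (B @ A)

rng = np.random.default_rng(0)
out_dim, in_dim, rank = 8, 8, 2
W = rng.standard_normal((out_dim, in_dim))
A = rng.standard_normal((rank, in_dim))
B = rng.standard_normal((out_dim, rank))

# The merged matrix applies the LoRA's effect in a single matmul,
# identical to running the base layer plus the adapter path.
W_merged = merge_lora(W, A, B, scale=1.0)
x = rng.standard_normal(in_dim)
assert np.allclose(W_merged @ x, W @ x + B @ (A @ x))
```

In Diffusers you can get the same effect at load time with `pipe.load_lora_weights(...)` followed by `pipe.fuse_lora()`, which avoids redoing the addition on every forward pass.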