Instructions to use ttj/flex-diffusion-2-1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use ttj/flex-diffusion-2-1 with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "ttj/flex-diffusion-2-1",
    torch_dtype=torch.bfloat16,
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Work that builds upon yours
#1 opened by bghira:
Hello Jonathan, I greatly appreciated all the info you shared about your model and how you created it. I followed some of your suggestions and continued training 2.1-v as a low-noise model with high-quality, high-resolution outputs.
My base resolution is 1024x1024, and it supports landscape (up to 1536x1024) and portrait (up to 1024x1536) modes.
https://huggingface.co/ptx0/pseudo-flex-base
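As a side note on the resolutions quoted above: Stable Diffusion 2.1-derived models generally expect width and height to be multiples of 64 (the VAE downsamples by 8x and the UNet downsamples further). A small helper like the following (hypothetical, not part of either model's code) could snap a requested size to a valid resolution within the 1024–1536 range mentioned in this post:

```python
def snap_resolution(width: int, height: int,
                    multiple: int = 64,
                    min_side: int = 1024,
                    max_side: int = 1536) -> tuple[int, int]:
    """Clamp each side to the supported range, then round it to the
    nearest multiple of `multiple` (64 for SD-style UNets)."""
    def snap(side: int) -> int:
        side = max(min_side, min(max_side, side))
        return round(side / multiple) * multiple
    return snap(width), snap(height)

# Examples against the resolutions mentioned above:
print(snap_resolution(1024, 1024))  # (1024, 1024) -- square base
print(snap_resolution(1600, 1000))  # (1536, 1024) -- clamped landscape
print(snap_resolution(1000, 1550))  # (1024, 1536) -- clamped portrait
```

The resulting values would then be passed to the pipeline call as `pipe(prompt, width=w, height=h)`.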
Again, thanks for the great starting point.