Instructions for using ostris/Flex.1-alpha with libraries, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use ostris/Flex.1-alpha with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "ostris/Flex.1-alpha",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
What Are the Best Generation Parameters for Good Results + Training/Finetuning/Dreambooth Questions
Hello, I've loaded the model in ComfyUI, but my generation results have been rather mediocre. To be frank, I am more comfortable with WebUI Forge, and I have used Flux with Forge in the past with good results.
First question: Is this model compatible with Forge? I wasn't able to generate successfully with it loaded.
Second: What would the best/optimal settings/workflow be for a good result in ComfyUI?
Third: Can this model be trained in Kohya_ss with Dreambooth, or is training a LoRA in AI Toolkit the only way to train it? The description states that the model is designed to be finetuned, but the only external training example given is a LoRA in AI Toolkit. A LoRA can already be trained on regular Flux; I thought the point of this model was its receptiveness to full finetuning. I'd appreciate some clarity.
Thanks.