Instructions to use dataautogpt3/OpenDalleV1.1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use dataautogpt3/OpenDalleV1.1 with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "dataautogpt3/OpenDalleV1.1",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "black fluffy gorgeous dangerous cat animal creature, large orange eyes, big fluffy ears, piercing gaze, full moon, dark ambiance, best quality, extremely detailed"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Make faster
#22
by luklue - opened
Hello,
I have a server with an RTX 3090, and it takes quite a long time, around 50 seconds, to produce an image.
Is there any way to make it faster? What do I need to put in the diffusers command?
Thanks
I suggest using a DPO-Turbo-Lora from here:
https://civitai.com/models/237775?modelVersionId=268054
You will have to play around with the settings to get a good image, e.g. CFG at 1-3 and steps at 10-20, plus whatever sampler works for you.
It also works with the LCM LoRA https://huggingface.co/latent-consistency/lcm-lora-sdxl
In A1111, I installed the AnimateDiff extension to get the LCM sampler. It speeds things up considerably, and you can still get great results with as few as 4 steps; anything over 8 tends to overcook with the LCM.