Tags: Text-to-Image · Diffusers · Safetensors · English · StableDiffusionPipeline · stable-diffusion · stable-diffusion-diffusers
Instructions to use wavymulder/portraitplus with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use wavymulder/portraitplus with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "wavymulder/portraitplus",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
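When running the snippet above on different hardware, a common pattern is to pick the device at runtime instead of hard-coding `"cuda"`, and to seed a generator so results are reproducible. The sketch below shows that generic pattern; it is not part of the official snippet, and the pipeline call is shown only in a comment because it downloads the model weights.

```python
import torch

# Pick the best available device: CUDA GPU, Apple Silicon ("mps"), or CPU fallback.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

# A seeded torch.Generator makes diffusers pipeline outputs reproducible, e.g.:
# image = pipe(prompt, generator=generator).images[0]
generator = torch.Generator(device).manual_seed(42)
```

Passing the same seed (and the same prompt and scheduler settings) should produce the same image across runs on the same hardware.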
Can we use this model to train our photos? (Dreambooth)
#3
by ayushsomani - opened
Hello,
I want to know if it's possible to train this model on our own photos (of a specific subject), just like we do with the original Stable Diffusion.
Thanks.
Hello. I wanted to ask the same question. Please also share your DreamBooth training configuration!
According to a reply on the Reddit thread that led me to discover this model, it should work fine!
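For reference, DreamBooth fine-tuning on a custom base model is usually run with the `train_dreambooth.py` example script from the diffusers repository. A minimal sketch follows; the data directory, the `sks person` identifier, and all hyperparameter values are illustrative placeholders, not a tested configuration for this model:

```shell
# Sketch of DreamBooth fine-tuning using diffusers' example script
# (examples/dreambooth/train_dreambooth.py). Hyperparameters below are
# illustrative starting points only, not tuned values for portraitplus.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="wavymulder/portraitplus" \
  --instance_data_dir="./my_photos" \
  --instance_prompt="photo of sks person" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-6 \
  --lr_scheduler="constant" \
  --max_train_steps=800 \
  --output_dir="./portraitplus-dreambooth"
```

The learning rate and step count typically need experimentation per subject; too many steps tends to overfit the face and degrade the base model's style.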
Thank you so much for sharing the link @Hexus
My pleasure, @ayushsomani !
For me, the training results were less successful than training on Analog Diffusion, which worked wonderfully. I used the exact same settings and set of photos, but the likeness in the final renderings is not as good.