# controlnet-borisfeldcomet/model_out

These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning. You can find some example images below.

```python
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("borisfeldcomet/model_out")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet
)
```
- prompt: High-quality close-up dslr photo of man wearing a hat with trees in the background
- prompt: Girl smiling, professional dslr photograph, dark background, studio lights, high quality
- prompt: Portrait of a clown face, oil on canvas, bittersweet expression

## Intended uses & limitations

### How to use
# TODO: add an example code snippet for running this diffusion pipeline
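A minimal inference sketch with the `diffusers` ControlNet pipeline. The conditioning image path, prompt, and scheduler choice below are illustrative assumptions, not part of this model card; because the card does not document the conditioning type, you must supply a conditioning image matching whatever this checkpoint was trained on.

```python
import torch
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)
from diffusers.utils import load_image

# Load the ControlNet weights and attach them to the SD 2.1 base pipeline
controlnet = ControlNetModel.from_pretrained(
    "borisfeldcomet/model_out", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)

# UniPC is a common fast-scheduler choice; this is an assumption, not a
# requirement stated by the model card
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

# "conditioning.png" is a placeholder path -- replace with your own
# conditioning image for this checkpoint
control_image = load_image("./conditioning.png")

prompt = "High-quality close-up dslr photo of man wearing a hat with trees in the background"
image = pipe(prompt, image=control_image, num_inference_steps=30).images[0]
image.save("output.png")
```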
### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
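The card does not document the training run, but ControlNet checkpoints like this one are commonly produced with the `diffusers` example script `examples/controlnet/train_controlnet.py`. The invocation below is a sketch only: the dataset name, output directory, and hyperparameters are placeholders, not the settings actually used for this model.

```shell
# Sketch of a typical diffusers ControlNet training run; all values below are
# placeholder assumptions, not this model's actual configuration.
accelerate launch train_controlnet.py \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-2-1-base" \
  --output_dir="model_out" \
  --dataset_name="your/conditioning-dataset" \
  --resolution=512 \
  --learning_rate=1e-5 \
  --train_batch_size=4
```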