# controlnet-Jieya/model_out_canny_captioned

These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below.

```python
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("Jieya/model_out_canny_captioned")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
)
```
prompt: a black and white silhouette of a tree with no leaves
prompt: a snowflake on a black background with a white border
prompt: a close up of a spiral shell with a white background

## Intended uses & limitations

### How to use
# TODO: add an example code snippet for running this diffusion pipeline
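Until the official snippet is filled in, here is a minimal sketch of an end-to-end run. It assumes the model expects a Canny-style edge map as conditioning (suggested by the model name, not confirmed by this card). `make_edge_map` uses Pillow's `FIND_EDGES` filter as a lightweight stand-in for a proper Canny detector (`cv2.Canny` is the closer match if OpenCV is available), and `generate` is a hypothetical helper name chosen for this example.

```python
from PIL import Image, ImageFilter


def make_edge_map(image: Image.Image) -> Image.Image:
    # Stand-in edge detector: Pillow's FIND_EDGES filter on a grayscale copy.
    # The model name suggests training on Canny maps, so cv2.Canny would be
    # the closer preprocessing if OpenCV is installed.
    edges = image.convert("L").filter(ImageFilter.FIND_EDGES)
    # The pipeline expects a 3-channel conditioning image.
    return edges.convert("RGB")


def generate(prompt: str, condition: Image.Image) -> Image.Image:
    # Heavy step: downloads the base model and ControlNet weights on first use
    # and requires a CUDA GPU as written.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    controlnet = ControlNetModel.from_pretrained(
        "Jieya/model_out_canny_captioned", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    return pipe(prompt, image=condition, num_inference_steps=30).images[0]
```

Usage: load any source image, derive the conditioning map, then sample, e.g. `generate("a close up of a spiral shell with a white background", make_edge_map(Image.open("shell.png")))`.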
### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]