Tags: text-to-image, diffusers, safetensors, stable-diffusion, stable-diffusion-diffusers, controlnet, diffusers-training
# controlnet-Jieya/model_out_canny_captioned_2

These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below.

```python
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("Jieya/model_out_canny_captioned_2")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
)
```
Example prompts:

- a view up into the canopy of a tree in a forest
- circle
- a square quilt pattern
- a diamond icon on a white background stock photo

## Intended uses & limitations

### How to use
# TODO: add an example code snippet for running this diffusion pipeline
### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
## Model tree for Jieya/model_out_canny_captioned_2

Base model: runwayml/stable-diffusion-v1-5