import torch
from diffusers import DiffusionPipeline
# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "AlonzoLeeeooo/shape-guided-controlnet",
    torch_dtype=torch.bfloat16,
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]

A re-implementation of ControlNet with shape masks.
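Note that the snippet above does not pass the shape-mask condition; a ControlNet pipeline normally takes a conditioning image alongside the prompt. A minimal sketch of building such a mask image with Pillow follows; the ellipse silhouette and the commented `image=` call are illustrative assumptions, not this repository's actual preprocessing code (in practice the mask comes from the U2-Net annotator):

```python
from PIL import Image, ImageDraw

# Build a simple white-on-black shape mask (an ellipse silhouette) at
# Stable Diffusion v1.5's native 512x512 resolution. This stands in for
# the U2-Net annotator output described in the weights structure below.
mask = Image.new("RGB", (512, 512), "black")
ImageDraw.Draw(mask).ellipse((96, 160, 416, 352), fill="white")

# Hypothetical usage with the pipeline loaded above:
# image = pipe(prompt, image=mask).images[0]
```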
Model Weights Structure
shape-guided-controlnet/
├── annotators                 <----- Model weights of the shape mask annotator (`U2-Net`)
│   └── u2net.pth
├── shape-guided-controlnet    <----- Model weights of the trained ControlNet with shape masks
│   ├── config.json
│   └── diffusion_pytorch_model.safetensors
└── stable-diffusion-v1.5      <----- Model weights of Stable Diffusion v1.5
    ├── feature_extractor
    ├── scheduler
    ├── text_encoder
    ├── tokenizer
    ├── unet
    ├── vae
    ├── model_index.json
    ├── v1-5-pruned.safetensors
    └── v1-inference.yaml
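The `annotators/u2net.pth` checkpoint is a U2-Net saliency model, which outputs a per-pixel foreground probability map; a binary shape mask is typically obtained by thresholding that map. A small NumPy sketch of this post-processing step is shown below; the function name and the 0.5 threshold are illustrative assumptions, not the repository's actual code:

```python
import numpy as np

def saliency_to_shape_mask(saliency: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a U2-Net-style saliency map into a 0/255 shape mask."""
    # Normalize to [0, 1] in case the model output is not already scaled.
    s_min, s_max = saliency.min(), saliency.max()
    if s_max > s_min:
        saliency = (saliency - s_min) / (s_max - s_min)
    # Pixels above the threshold become foreground (255), the rest background (0).
    return (saliency > threshold).astype(np.uint8) * 255

# Toy example: a 2x2 "saliency map" with two salient pixels.
mask = saliency_to_shape_mask(np.array([[0.1, 0.9], [0.4, 0.8]]))
```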
Results
Here are some example results generated by the trained model:
- "A red bag"
- "A sport car"
- "A blue truck"