---
tags:
- text-to-image
- diffusers
- safetensors
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- controlnet
- diffusers-training
---
# controlnet-dyamagishi/output

These are ControlNet weights trained on cagliostrolab/animagine-xl-3.1 with a new type of conditioning. You can find some example images below.

```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# The base model is SDXL-derived, so the XL variant of the ControlNet
# pipeline is required (StableDiffusionControlNetPipeline is for SD 1.x/2.x).
controlnet = ControlNetModel.from_pretrained("dyamagishi/output")
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1", controlnet=controlnet
)
```
prompt: outdoors, scenery, cloud, multiple_girls, sky, day, tree, grass, architecture, 2girls, blue_sky, building, standing, skirt, long_hair, mountain, east_asian_architecture, from_behind, castle, facing_away, black_skirt, school_uniform, pagoda, waterfall, white_shirt, white_hair, shirt, cloudy_sky, bag

## Intended uses & limitations

### How to use
# TODO: add an example code snippet for running this diffusion pipeline
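Until the TODO above is filled in by the authors, here is a minimal sketch. The gradient-based edge helper is an assumption: the card does not document what the new conditioning type actually is, so this stands in for whatever preprocessor the model was trained with. The commented-out section shows standard diffusers ControlNet usage with the repositories named in this card.

```python
import numpy as np
from PIL import Image


def make_edge_condition(image: Image.Image, threshold: float = 32.0) -> Image.Image:
    """Crude gradient-magnitude edge map as a placeholder conditioning image.

    Hypothetical preprocessor: the card does not say which conditioning
    this ControlNet expects, so replace this with the real one.
    """
    gray = np.asarray(image.convert("L"), dtype=np.float32)
    gy, gx = np.gradient(gray)
    edges = (np.hypot(gx, gy) > threshold).astype(np.uint8) * 255
    # ControlNet pipelines expect a 3-channel PIL image.
    return Image.fromarray(np.stack([edges] * 3, axis=-1))


# Full generation (downloads several GB of weights; GPU strongly recommended):
# import torch
# from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
#
# controlnet = ControlNetModel.from_pretrained("dyamagishi/output", torch_dtype=torch.float16)
# pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
#     "cagliostrolab/animagine-xl-3.1", controlnet=controlnet, torch_dtype=torch.float16
# ).to("cuda")
# condition = make_edge_condition(Image.open("conditioning_input.png"))
# image = pipe(prompt="outdoors, scenery, cloud, 2girls, pagoda", image=condition).images[0]
# image.save("result.png")
```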
### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
## Model tree for dyamagishi/output

Base model: stabilityai/stable-diffusion-xl-base-1.0, finetuned successively as Linaqruf/animagine-xl-2.0 → cagliostrolab/animagine-xl-3.0 → cagliostrolab/animagine-xl-3.1, on which this ControlNet was trained.