---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
inference: true
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
---

# controlnet-Nawatix/out_multihmr_tmp

These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning. You can find some example images below.

prompt: acrobatics, two people in gymnastics outfits performing on the floor
![images_0](./images_0.png)

prompt: cosplay, a woman in a blue top and blue skirt holding a stick
![images_1](./images_1.png)

prompt: dance, a couple dancing in a ballroom on a stage
![images_2](./images_2.png)

prompt: drama, the cast of the show 'the great gatsby'
![images_3](./images_3.png)

prompt: movie, a man in a red and white costume smiling
![images_4](./images_4.png)

## Intended uses & limitations

#### How to use

A minimal inference sketch using the standard diffusers SDXL ControlNet pipeline; the repository id (`Nawatix/out_multihmr_tmp`), the conditioning image path, and the prompt are assumptions — adjust them to your setup.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Load the trained ControlNet weights (repo id assumed from the model name above)
controlnet = ControlNetModel.from_pretrained(
    "Nawatix/out_multihmr_tmp", torch_dtype=torch.float16
)

# Attach the ControlNet to the SDXL base pipeline
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# The conditioning image must match the conditioning type this ControlNet was trained on
conditioning_image = load_image("path/to/conditioning_image.png")
prompt = "acrobatics, two people in gymnastics outfits performing on the floor"

image = pipe(prompt, image=conditioning_image).images[0]
image.save("output.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]