|
|
--- |
|
|
base_model: stabilityai/stable-diffusion-xl-base-1.0 |
|
|
library_name: diffusers |
|
|
license: openrail++ |
|
|
inference: true |
|
|
tags: |
|
|
- stable-diffusion-xl |
|
|
- stable-diffusion-xl-diffusers |
|
|
- text-to-image |
|
|
- diffusers |
|
|
- controlnet |
|
|
- pbr |
|
|
- material-generation |
|
|
- diffusers-training |
|
|
--- |
|
|
|
|
|
|
|
|
|
|
|
|
|
# controlnet-HenghuiB/test_output |
|
|
|
|
|
These are ControlNet weights trained on `stabilityai/stable-diffusion-xl-base-1.0` for PBR material generation from height maps. |
|
|
The model generates a 4-channel output, corresponding to a 3-channel BaseColor map and a 1-channel Roughness map. |
|
|
You can find some example images below. For each prompt, the grid shows the input height map, followed by the generated BaseColor map(s) and then the generated Roughness map(s). |
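The 4-channel split described above can be sketched as follows. This is a minimal illustration with NumPy; the array name and resolution are hypothetical, not part of the released API — it only shows how channels 0-2 (BaseColor) and channel 3 (Roughness) would be separated from a combined output:

```python
import numpy as np

# Hypothetical 4-channel model output of shape (H, W, 4), values in [0, 1]:
# channels 0-2 hold the RGB BaseColor, channel 3 holds the Roughness.
output = np.random.rand(512, 512, 4).astype(np.float32)

base_color = output[..., :3]  # (512, 512, 3) RGB BaseColor map
roughness = output[..., 3]    # (512, 512) single-channel Roughness map
```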
|
|
|
|
|
**Prompt:** `a photorealistic, highly detailed photograph of a lunar surface material` |
|
|
 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Intended uses & limitations |
|
|
|
|
|
#### How to use |
|
|
|
|
|
The snippet below is a minimal usage sketch, not a verified recipe. It assumes the checkpoint is hosted at `HenghuiB/test_output` and that the weights load with the standard `StableDiffusionXLControlNetPipeline`; because this model produces a 4-channel output (BaseColor + Roughness) rather than the usual 3 channels, decoding the extra channel may require a custom pipeline or post-processing step.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Load the ControlNet weights and attach them to the SDXL base model.
controlnet = ControlNetModel.from_pretrained(
    "HenghuiB/test_output", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Condition generation on a height map (the path is illustrative).
height_map = load_image("height_map.png")
prompt = "a photorealistic, highly detailed photograph of a lunar surface material"
image = pipe(prompt, image=height_map, num_inference_steps=30).images[0]
image.save("material.png")
```
|
|
|
|
|
#### Limitations and bias |
|
|
|
|
|
As a ControlNet trained on `stabilityai/stable-diffusion-xl-base-1.0`, this model inherits the limitations and biases of the base model. It is intended specifically for height-map-conditioned PBR material generation; prompts or conditioning images outside that domain may produce unreliable results, and generated maps are not guaranteed to be physically accurate or seamlessly tileable.
|
|
|
|
|
## Training details |
|
|
|
|
|
[TODO: describe the data used to train the model] |