neuralvfx committed
Commit d360f29 · verified · Parent(s): 36eec61

Update README.md

Files changed (1):
  README.md (+1 −1)
README.md CHANGED
@@ -16,7 +16,7 @@ thumbnail: https://huggingface.co/neuralvfx/Z-Image-SAM-ControlNet/resolve/main/
  - This ControlNet is trained exclusively on images generated by [Segment Anything (SAM)](https://aidemos.meta.com/segment-anything/)
  - Base model used was [Tongyi-MAI/Z-Image](https://huggingface.co/Tongyi-MAI/Z-Image)
  - Uses SAM style images as input, outputs photorealistic images
- - Trained at 1024x1024 resolution
+ - Trained at 1024x1024 resolution, inference works best at 1.5k and up
  - Trained on 220K segmented images from [laion2b-squareish-1536px](https://huggingface.co/datasets/opendiffusionai/laion2b-squareish-1536px)
  - Trained using this repo: [https://github.com/aigc-apps/VideoX-Fun](https://github.com/aigc-apps/VideoX-Fun)
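The changed line notes that the model was trained at 1024x1024 but that inference works best at roughly 1536px and up. A minimal sketch of one practical consequence: upscaling a SAM-style control image before feeding it to the pipeline. The function name and the nearest-neighbor choice are this sketch's own assumptions (flat segment colors should not be blended by interpolation), not part of the repo.

```python
from PIL import Image


def prepare_control_image(img: Image.Image, target: int = 1536) -> Image.Image:
    """Upscale a SAM-style segmentation image so its shorter side is at
    least `target` px, per the README note that inference works best at
    ~1.5k resolution and above. Images already large enough pass through.
    """
    w, h = img.size
    if min(w, h) >= target:
        return img
    scale = target / min(w, h)
    # NEAREST keeps segment boundaries crisp: no interpolated colors
    # appear between flat SAM regions, which could confuse the ControlNet.
    return img.resize((round(w * scale), round(h * scale)), Image.NEAREST)
```

For example, a 1024x1024 SAM map would be scaled up to 1536x1536 before being passed as the control input.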