Image-to-Image
Diffusers
Safetensors
StableDiffusionControlNetPipeline
controlnet
stable-diffusion
satellite-imagery
osm
Instructions to use MVRL/VectorSynth-COSA with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use MVRL/VectorSynth-COSA with Diffusers:
pip install -U diffusers transformers accelerate
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("MVRL/VectorSynth-COSA")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet
)
- Notebooks
- Google Colab
- Kaggle
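Once the pipeline is loaded, inference follows the standard Diffusers ControlNet call: pass a conditioning image (here, presumably a rendered OSM tile, given the model's tags) together with a text prompt. The helpers below are a minimal sketch, assuming the 512x512 resolution of the stable-diffusion-2-1-base backbone; the prompt, file path, and step count are illustrative, not documented by this model card.

```python
from PIL import Image

def prepare_condition(path, size=512):
    # Load the conditioning render and resize it to the resolution the
    # SD 2.1-base backbone expects. 512x512 is an assumption based on the
    # base model, not stated by the model card.
    return Image.open(path).convert("RGB").resize((size, size))

def generate(pipe, render, prompt="a satellite image", steps=30):
    # Run the ControlNet pipeline with the render as the conditioning image
    # and return the first generated PIL image.
    return pipe(prompt, image=render, num_inference_steps=steps).images[0]
```

Typical use: `image = generate(pipe, prepare_condition("osm_render.png"))`, where `osm_render.png` is a hypothetical map render on disk.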
Upload render_encoder/cosa-render_encoder.pth with huggingface_hub
render_encoder/cosa-render_encoder.pth
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:abb4d405f0fb319363943275d57870b4a5318b173d16ff8d6a1373929d6ea5ac
+size 10976