Image-to-Image
Tags: Diffusers · Safetensors · StableDiffusionControlNetPipeline · controlnet · stable-diffusion · satellite-imagery · osm
Instructions for using MVRL/VectorSynth-COSA with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Diffusers
How to use MVRL/VectorSynth-COSA with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("MVRL/VectorSynth-COSA")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet
)
```

- Notebooks
  - Google Colab
  - Kaggle
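After loading, the pipeline is called with a text prompt plus the conditioning render. A minimal end-to-end sketch, assuming an OSM-style render saved locally; the file name `osm_render.png`, the prompt, and the `snap_to_multiple` helper are illustrative, not part of the model card:

```python
def snap_to_multiple(x: int, m: int = 8) -> int:
    """Stable Diffusion operates on latents downscaled by 8, so the width and
    height passed to the pipeline should be multiples of 8."""
    return max(m, (x // m) * m)


def generate_aerial_image(control_path: str, out_path: str = "generated.png"):
    """Hypothetical end-to-end call; it downloads multi-GB weights, so it is
    defined here but not invoked."""
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from PIL import Image

    # Load the conditioning render and snap its size to the latent grid.
    control = Image.open(control_path).convert("RGB")
    w = snap_to_multiple(control.width)
    h = snap_to_multiple(control.height)
    control = control.resize((w, h))

    controlnet = ControlNetModel.from_pretrained("MVRL/VectorSynth-COSA")
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet
    )
    # Prompt is illustrative; the ControlNet steers layout from the render.
    image = pipe("satellite image", image=control, width=w, height=h).images[0]
    image.save(out_path)
```

On a CUDA machine you would typically also move the pipeline to GPU with `pipe.to("cuda")` before calling it.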
Upload render_encoder/cosa-render_encoder.pth with huggingface_hub
render_encoder/cosa-render_encoder.pth ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f9af6315721ca25fa0401e7b9cc7cab1f21c7ca4529788710c071213286a71d
+size 12576
```
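The added file is a Git LFS pointer rather than the weights themselves: a short key/value text file recording the spec version, the SHA-256 of the real blob, and its size in bytes. A small sketch that parses this pointer format (the `parse_lfs_pointer` helper is illustrative):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# The pointer committed above for render_encoder/cosa-render_encoder.pth.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:8f9af6315721ca25fa0401e7b9cc7cab1f21c7ca4529788710c071213286a71d
size 12576
"""
info = parse_lfs_pointer(pointer)
```

The small `size` (12576 bytes) confirms this particular encoder checkpoint is tiny; `huggingface_hub` resolves the `oid` to the actual blob at download time.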