Instructions to use lavinal712/sd-control-lora-segmentation with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Diffusers
How to use lavinal712/sd-control-lora-segmentation with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "lavinal712/sd-control-lora-segmentation",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(image=input_image, prompt=prompt).images[0]
```

- Notebooks
  - Google Colab
  - Kaggle
---
pipeline_tag: image-to-image
tags:
- control-lora-v2
- stable-diffusion
---

# Model Card for lavinal712/sd-control-lora-segmentation
## Model Description

This is a ControlNet weight trained on runwayml/stable-diffusion-v1-5 with segmentation conditioning.
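
The card does not show a ControlNet-specific loading example. As a rough sketch only, assuming the repository hosts weights in the standard ControlNet format (the `control-lora-v2` tag suggests they may instead require the ControlLoRA-v2 codebase), segmentation-conditioned generation with `diffusers` might look like this; the segmentation-map path and prompt are placeholders:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load the ControlNet weights from this repo (assumes standard ControlNet format).
controlnet = ControlNetModel.from_pretrained(
    "lavinal712/sd-control-lora-segmentation", torch_dtype=torch.float16
)

# Pair it with the SD v1.5 base checkpoint it was trained against.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The conditioning input is a segmentation map; the path here is a placeholder.
seg_map = load_image("segmentation_map.png")

image = pipe(
    "a photo of a modern living room",
    image=seg_map,
    num_inference_steps=30,
).images[0]
image.save("output.png")
```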
### Training

This model was trained on a segmentation dataset based on the SAM-LLaVA-Captions10M dataset. The Stable Diffusion v1.5 checkpoint was used as the base model for the ControlNet.

- [SAM-LLAVA-55k](https://huggingface.co/datasets/unography/SAM-LLAVA-55k)
#### Training Method

- Trained on [SAM-LLAVA-55k](https://huggingface.co/datasets/unography/SAM-LLAVA-55k) for 55,000 steps with a batch size of 4.
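
The card does not include the training command. Purely as a hypothetical sketch, a run with these hyperparameters (55,000 steps, batch size 4) using the stock `diffusers` ControlNet example script might look like the following; the choice of trainer, resolution, and learning rate are assumptions, not taken from the card:

```bash
# Hypothetical invocation of diffusers' examples/controlnet/train_controlnet.py
accelerate launch train_controlnet.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --dataset_name="unography/SAM-LLAVA-55k" \
  --resolution=512 \
  --train_batch_size=4 \
  --max_train_steps=55000 \
  --learning_rate=1e-5 \
  --output_dir="sd-control-lora-segmentation"
```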