Instructions for using kraina/map_diffusion_lora with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use kraina/map_diffusion_lora with Diffusers:

pip install -U diffusers transformers accelerate

import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("kraina/map_diffusion_lora")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
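The snippet above hard-codes `device_map="cuda"` and notes in a comment that Apple devices should use `"mps"` instead. A minimal helper for picking the device at runtime could look like the sketch below; `pick_device` is an illustrative name, not part of the model card or the Diffusers API, and it assumes only that `torch` is installed.

```python
import torch

def pick_device() -> str:
    """Return the best available backend string for this machine.

    Falls back from CUDA to Apple's Metal (MPS) to plain CPU, matching
    the devices mentioned in the snippet above.
    """
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

# The result can be passed straight to the pipeline, e.g.
# DiffusionPipeline.from_pretrained(..., device_map=pick_device())
print(pick_device())
```

On a machine with neither CUDA nor MPS this simply returns `"cpu"`, which is slow for Stable Diffusion but still functional.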
End of training
- README.md +4 -0
- image_0.png +0 -0
- image_1.png +0 -0
- image_2.png +0 -0
- image_3.png +0 -0
- pytorch_lora_weights.bin +1 -1
README.md
CHANGED
@@ -14,4 +14,8 @@ inference: true
 # LoRA text2image fine-tuning - mprzymus/map_diffusion_lora
 These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the mprzymus/text2tile_large dataset. You can find some example images in the following.

+[example image 0]
+[example image 1]
+[example image 2]
+[example image 3]
image_0.png
CHANGED
image_1.png
CHANGED
image_2.png
CHANGED
image_3.png
CHANGED
pytorch_lora_weights.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:270df41125fc3aee7c13b59b480f271d7497c7365ab06c78c082f6870b8bcdb2
 size 3287771
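The pytorch_lora_weights.bin entry above is not the weights file itself but a Git LFS pointer: three `key value` lines recording the spec version, the object's sha256 oid, and its byte size. A small sketch of parsing that pointer format follows; `parse_lfs_pointer` is a hypothetical helper written for illustration, and the oid and size are the ones shown in this commit.

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split the 'key value' lines of a Git LFS pointer file into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents after this commit (copied from the diff above).
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:270df41125fc3aee7c13b59b480f271d7497c7365ab06c78c082f6870b8bcdb2
size 3287771"""

fields = parse_lfs_pointer(pointer)
# The oid is what git-lfs compares against the sha256 of the downloaded
# weights; the size is the expected length in bytes (~3.3 MB here).
print(fields["oid"])
print(fields["size"])
```

After downloading the real weights, hashing the file with sha256 and comparing against `fields["oid"]` is how LFS detects corruption or truncation.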