Text-to-Image · Diffusers · StableDiffusionPipeline · stable-diffusion · sygil-diffusion · sygil-devs · finetune · stable-diffusion-1.5
Instructions to use Sygil/Sygil-Diffusion with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Sygil/Sygil-Diffusion with Diffusers:
pip install -U diffusers transformers accelerate

import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained("Sygil/Sygil-Diffusion", dtype=torch.bfloat16, device_map="cuda")

prompt = "environment art, realistic"
image = pipe(prompt).images[0]

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
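The Diffusers snippet above hard-codes device_map="cuda" and only mentions the "mps" alternative in a comment. Below is a minimal device-selection sketch; it is an illustrative addition rather than part of the model card, and assumes only that torch and diffusers are installed:

import torch
from diffusers import DiffusionPipeline

# Pick whichever backend is available: NVIDIA GPU, Apple Silicon ("mps"), or CPU.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

pipe = DiffusionPipeline.from_pretrained("Sygil/Sygil-Diffusion", dtype=torch.bfloat16)
pipe = pipe.to(device)  # move the whole pipeline to the chosen device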
Commit 685ab11 · Parent(s): 75d30a7 · Update README.md
README.md CHANGED
@@ -3,7 +3,7 @@ license: openrail++
 language:
 - en
 - ja
--
+- zh
 tags:
 - stable-diffusion
 - sygil-diffusion
@@ -19,7 +19,8 @@ pipeline_tag: text-to-image
 -----------------
 This model is a Stable Diffusion v1.5 fine-tune trained on the [Imaginary Network Expanded Dataset](https://github.com/Sygil-Dev/INE-dataset).
 It is an advanced version of Stable Diffusion and can generate nearly all kinds of images like humans, reflections, cities, architecture, fantasy, concepts arts, anime, manga, digital arts, landscapes, or nature views.
-This model allows the user to have total control of the generation as they can use multiple tags and namespaces to control almost everything
+This model allows the user to have total control of the generation as they can use multiple tags and namespaces to control almost everything
+on the final result including image composition.

 **Note that the prompt engineering techniques is a bit different from other models and Stable Diffusion,
 while you can still use normal prompts like in other Stable Diffusion models in order to get the best out of this model you will need to make use of tags and namespaces.
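The README's note about tags and namespaces implies structured prompts rather than plain free-form text. A rough sketch of what a namespaced prompt might look like; the specific namespace keys used here ("environment:", "style:") are hypothetical illustrations, not taken from this commit:

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Sygil/Sygil-Diffusion", dtype=torch.bfloat16)
pipe = pipe.to("cuda")  # or "mps" / "cpu"

# Hypothetical namespaced prompt; the namespaces the model actually understands
# are documented in the full model card, not in this diff.
prompt = "environment art, environment:city, style:realistic"
image = pipe(prompt).images[0]
image.save("sygil_sample.png")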