Reimu Hakurei committed · Commit 675c8ee · Parent(s): d54a055

Update README

README.md (CHANGED)
# waifu-diffusion - Diffusion for Weebs

waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through [Textual Inversion](https://github.com/rinongal/textual_inversion).

<img src=https://cdn.discordapp.com/attachments/872361510133981234/1016022078635388979/unknown.png?3867929 width=30% height=30%>

<sub>Prompt: touhou 1girl komeiji_koishi portrait</sub>

## Model Description

The model originally used for fine-tuning is [Stable Diffusion V1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4), which is a latent image diffusion model trained on [LAION2B-en](https://huggingface.co/datasets/laion/laion2B-en).

The current model is based on [Yasu Seno](https://twitter.com/naclbbr)'s [TrinArt Stable Diffusion](https://huggingface.co/naclbit/trinart_stable_diffusion), which has been fine-tuned on 30,000 high-resolution manga/anime-style images for 3.5 epochs.

With [Textual Inversion](https://github.com/rinongal/textual_inversion), the embeddings for the text encoder have been trained to align more closely with anime-styled images, reducing the need for excessive prompting.
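As a toy illustration of the idea only (not the actual training code, which optimizes the embedding against the diffusion model's loss), Textual Inversion keeps the text encoder frozen and trains just one new token embedding by gradient descent. The dimensions, stand-in targets, and squared-error objective below are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # toy embedding width; real text encoders use hundreds of dimensions

# Frozen stand-in targets for what the new concept's embedding should match
# (the real method derives its gradient from the diffusion loss instead).
targets = rng.normal(size=(4, dim))
target_mean = targets.mean(axis=0)

# The only trainable parameter: the embedding vector for one new token.
token_embedding = np.zeros(dim)

lr = 0.1
for _ in range(200):
    # Gradient of 0.5 * ||v - target_mean||^2 with respect to v
    grad = token_embedding - target_mean
    token_embedding -= lr * grad

# token_embedding has converged to target_mean; nothing else was updated.
```

The point of the sketch is the parameter count: everything in the model stays frozen except the single new embedding, which is why the method needs comparatively little data.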

## Training Data & Annotative Prompting

The data used for Textual Inversion came from a random sample of 25k Danbooru images, filtered with [CLIP Aesthetic Scoring](https://github.com/christophschuhmann/improved-aesthetic-predictor) so that only images with an aesthetic score greater than `6.0` were kept.
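The filtering step itself reduces to a threshold over predicted scores. A minimal sketch with made-up image ids and scores (the real pipeline runs the aesthetic predictor on CLIP image embeddings of the sampled images):

```python
# Hypothetical (image_id, score) pairs standing in for the aesthetic
# predictor's outputs over the 25k sampled Danbooru images.
scored = [
    ("img_001", 6.8),
    ("img_002", 5.4),
    ("img_003", 7.2),
    ("img_004", 6.0),  # exactly at the threshold: dropped ("greater than 6.0")
]

THRESHOLD = 6.0
kept = [image_id for image_id, score in scored if score > THRESHOLD]
# kept == ["img_001", "img_003"]
```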

Captions are Danbooru-style captions.
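For illustration, a hypothetical helper (not a script from this repo) showing the Danbooru tag convention used in the example prompt above: lowercase tags with underscores in place of spaces, joined by single spaces:

```python
def danbooru_caption(tags):
    """Normalize tags to Danbooru style (lowercase, underscores for
    spaces) and join them into one space-separated caption string."""
    return " ".join(tag.strip().lower().replace(" ", "_") for tag in tags)

caption = danbooru_caption(["touhou", "1girl", "Komeiji Koishi", "portrait"])
# caption == "touhou 1girl komeiji_koishi portrait"
```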

## Downstream Uses

This model can be used for entertainment purposes and as a generative art assistant.

## Example Code
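A sketch of typical usage with Hugging Face's `diffusers` library. The model id `hakurei/waifu-diffusion` is an assumption for illustration, and this is not necessarily the exact snippet the README ships; the output filename `reimu_hakurei.png` matches the one the README's example saves. The heavy weight download and GPU work are kept inside the function:

```python
def generate(prompt: str = "touhou 1girl komeiji_koishi portrait",
             model_id: str = "hakurei/waifu-diffusion",  # assumed repo id
             out_path: str = "reimu_hakurei.png") -> None:
    """Generate one image and save it. Downloads several GB of model
    weights on the first call and expects a CUDA-capable GPU."""
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    image = pipe(prompt).images[0]
    image.save(out_path)
```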

## Team Members and Acknowledgements

This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/) and the author of the original fine-tuned model that this work was based upon, [Yasu Seno](https://twitter.com/naclbbr)!

Additionally, the methods presented in the [Textual Inversion](https://github.com/rinongal/textual_inversion) repo were an incredible help.

- [Anthony Mercurio](https://github.com/harubaru)
- [Salt](https://github.com/sALTaccount/)