---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:

- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- The `split_einsum` version is compatible with all compute unit options, including the Neural Engine.
- The `original` version is only compatible with the `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded in the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from the original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).
- This model can be used with ControlNet.

<br>
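As a rough sketch of how a conversion and a test generation look with the upstream `apple/ml-stable-diffusion` CLI (the checkpoint path, output directories, and prompt below are placeholders, not part of this model card; see the linked conversion instructions for the exact steps):

```shell
# Convert a Stable Diffusion checkpoint to Core ML.
# SPLIT_EINSUM targets all compute units including the Neural Engine;
# ORIGINAL is the CPU & GPU-only variant described above.
python -m python_coreml_stable_diffusion.torch2coreml \
  --convert-unet --convert-text-encoder \
  --convert-vae-decoder --convert-vae-encoder \
  --model-version <your-model-or-hub-id> \
  --attention-implementation SPLIT_EINSUM \
  -o ./coreml-out

# Generate an image with the converted model;
# --compute-unit ALL lets Core ML schedule across CPU/GPU/Neural Engine.
python -m python_coreml_stable_diffusion.pipeline \
  --prompt "comic book style portrait" \
  -i ./coreml-out -o ./images \
  --compute-unit ALL --seed 42
```

This is a command sketch only; it assumes the `python_coreml_stable_diffusion` package is installed and is not a substitute for the wiki instructions linked above.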
# westernAnimation_v1_cn:

Source(s): [CivitAI](https://civitai.com/models/86546/western-animation-diffusion)<br>

## Western Animation Diffusion

Comicbook and Western Animation Style Model

Do you like what I do? Consider supporting me on [**Patreon**](https://www.patreon.com/Lykon275) or feel free to [**buy me a coffee**](https://snipfeed.co/lykon).

A ❤️, a kind comment or a review is greatly appreciated.

## Purpose of this model

Train character loras where the dataset is mostly made of cartoon screencaps or comicbooks, allowing less style transfer and less overfitting.

Add variety to mixes.

Have an alternative to anime models when it comes to western stuff.

NOT to be used with style loras. Also NOT for style lora training.

## Suggested settings

Set the ETA Noise Seed Delta (ENSD) to 31337

Set CLIP Skip to 2

DISABLE face restore. It's terrible, never use it

Use negative prompts and embeddings that don't ruin the style

Use AnimeVideo or Foolhardy as upscalers in highres fix

Use ADetailer for far away shots or full body images to avoid blurred faces
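In AUTOMATIC1111-style web UIs, the first three settings above typically live in the UI's `config.json`. The key names below are an assumption (they vary between versions, so verify against your install):

```json
{
  "eta_noise_seed_delta": 31337,
  "CLIP_stop_at_last_layers": 2,
  "face_restoration": false
}
```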
## Brief history

This was requested by a supporter and I also wanted to see if I was capable of doing it. It was a funny little project.<br><br>





,.jpeg)

