---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---

# Core ML Converted Model:

- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- The `split_einsum` version is compatible with all compute unit options, including the Neural Engine.
- The `original` version is only compatible with the `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded in the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from the original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).<br>
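Besides Mochi Diffusion, the converted model can also be run with the reference pipeline from the [apple/ml-stable-diffusion](https://github.com/apple/ml-stable-diffusion) repo. The sketch below is an invocation example only: the model directory name, prompt, and output path are placeholder assumptions, not part of this card.

```shell
# Sketch only: assumes the apple/ml-stable-diffusion Python package is installed
# and that ./lofi-v2_original is the downloaded Core ML model directory.
# An `original` model needs CPU_AND_GPU; a `split_einsum` model also accepts
# ALL or CPU_AND_NE (Neural Engine).
python -m python_coreml_stable_diffusion.pipeline \
  --prompt "a photo of an astronaut riding a horse on mars" \
  -i ./lofi-v2_original \
  -o ./outputs \
  --compute-unit CPU_AND_GPU \
  --seed 93
```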

# lofi-V2:
Source(s): [Hugging Face](https://huggingface.co/SG161222/Realistic_Vision_V2.0) - [CivitAI](https://civitai.com/models/4201/realistic-vision-v20)<br>

**L.O.F.I: Limitless Originality Free from Interference**

🧐 No special face alignment

🚀 Improved line details

🚀 Improved prompt understanding

Based on the LOFI-v1 model, fine-tuned for 80,000 steps / 300 epochs

📷 More camera concepts

![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/c6b8379a-5691-4da9-ed99-dd0f5fed3b00/width=400/6531.jpeg)

🎨 Exact palette

![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/87ac2517-0f35-43db-1d09-1b2e1b3b5800/width=400/9812.jpeg)
<br>**Prompt suggestions**

Since the text encoder in this model is already thoroughly trained, do not use very high attention weights for emphasis control; they can cause drawing errors. It is recommended to keep all attention weights no higher than 1.2.
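The 1.2 ceiling can also be enforced mechanically before a prompt is submitted. A minimal sketch, assuming A1111-style `(token:weight)` emphasis syntax; `clamp_attention` is a hypothetical helper written for this card, not part of any released tooling:

```python
import re

MAX_WEIGHT = 1.2  # ceiling recommended by the model card


def clamp_attention(prompt: str, max_weight: float = MAX_WEIGHT) -> str:
    """Clamp A1111-style '(token:weight)' attention weights to max_weight."""
    def repl(match: re.Match) -> str:
        token, weight = match.group(1), float(match.group(2))
        return f"({token}:{min(weight, max_weight):g})"
    # Match '(some words:1.5)' style emphasis groups and rewrite the weight.
    return re.sub(r"\(([^():]+):([0-9.]+)\)", repl, prompt)
```

Applied to `(masterpiece:1.5)`, this yields `(masterpiece:1.2)`, while weights already at or below the ceiling pass through unchanged.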

If there is no special composition requirement, a long list of negative prompts is unnecessary; terms such as "missing hands" may actually interfere with human body drawing. DeepNegative alone is sufficient as a negative prompt.

It is strongly recommended to generate with hires.fix. Recommended parameters:
- Final output: 512x768
- Steps: 20, Sampler: Euler a, CFG scale: 7
- Size: 256x384, Denoising strength: 0.75
- Hires upscale: 2, Hires steps: 40
- Hires upscaler: Latent (bicubic antialiased)
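The numbers in the list above are consistent with each other: hires.fix denoises at the base size, then upscales by the hires factor before the second pass. A small sketch (the helper name is made up for illustration; integer truncation is an assumption) showing how the 256x384 base and 2x upscale give the 512x768 final output:

```python
def hires_output_size(base_w: int, base_h: int, upscale: float) -> tuple[int, int]:
    """Final resolution produced by hires.fix: the base render size
    multiplied by the hires upscale factor."""
    return int(base_w * upscale), int(base_h * upscale)


# 256x384 base with a 2x hires upscale -> the recommended 512x768 final output
print(hires_output_size(256, 384, 2))
```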

Most of the sample images were generated with Hires.fix.

Note that if you use Hires.fix, you may not be able to reproduce an image with the same set of parameters in WebUI, because Hires.fix introduces double randomness.<br><br>

<br>

<br>

<br>

![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/dd623864-fc44-4e2c-5ba0-fb8f5f4ee700/width=1200/t2i%2009.jpeg)