ProDem committed on
Commit a0ce29d · 1 Parent(s): bd89131

Delete README.md

Files changed (1)
  1. README.md +0 -149
README.md DELETED
@@ -1,149 +0,0 @@
# Stable Diffusion web UI
A browser interface based on the Gradio library for Stable Diffusion.

![](txt2img_Screenshot.png)

Check the [custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) wiki page for extra scripts developed by users.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One-click install and run script (but you still must install Python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention: specify parts of the text that the model should pay more attention to (a minimal parsing sketch follows this list)
  - a man in a ((tuxedo)) - will pay more attention to tuxedo
  - a man in a (tuxedo:1.21) - alternative syntax
  - select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y plot, a way to draw a 2-dimensional plot of images with different parameters
- Textual Inversion
  - have as many embeddings as you want and use any names you like for them
  - use multiple embeddings with different numbers of vectors per token
  - works with half precision floating point numbers
  - train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
  - GFPGAN, neural network that fixes faces
  - CodeFormer, face restoration tool as an alternative to GFPGAN
  - RealESRGAN, neural network upscaler
  - ESRGAN, neural network upscaler with a lot of third party models
  - SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
  - LDSR, latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
  - parameters you used to generate images are saved with that image
  - in PNG chunks for PNG, in EXIF for JPEG (a PNG-reading sketch follows this list)
  - can drag the image to the PNG info tab to restore generation parameters and automatically copy them into the UI
  - can be disabled in settings
  - drag and drop an image/text parameters to the prompt box
- Read Generation Parameters button, loads parameters in the prompt box into the UI
- Settings page
- Running arbitrary Python code from the UI (must run with --allow-code to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Random artist button
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
- Styles, a way to save part of a prompt and easily apply it via dropdown later
- Variations, a way to generate the same image but with tiny differences
- Seed resizing, a way to generate the same image but at a slightly different resolution
- CLIP interrogator, a button that tries to guess the prompt from an image
- Prompt Editing, a way to change the prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without the usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from the community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
  - separate prompts using uppercase `AND`
  - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original Stable Diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates Danbooru-style tags for anime prompts (add --deepdanbooru to commandline args)
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards (add --xformers to commandline args)
- History tab: view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
  - hypernetworks and embeddings options
  - Preprocessing images: cropping, mirroring, autotagging using BLIP or DeepDanbooru (for anime)
- Clip skip
- Use Hypernetworks
- Use VAEs
- Estimated completion time in progress bar
- API (a request sketch follows this list)
- Support for the dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- Aesthetic Gradients, a way to generate images with a specific aesthetic by using CLIP image embeddings (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
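
For illustration, here is a minimal sketch of how the attention syntax shown above can be read. This is not the web UI's actual parser; the function name is made up for this example, and the 1.1-per-parenthesis multiplier follows the behavior described in the project wiki:

```python
import re

# Simplified illustration of the attention syntax: each pair of parentheses
# boosts a phrase's weight by a factor (assumed 1.1 here, per the wiki),
# and (text:1.21) sets the weight explicitly. Not the web UI's real parser.
def parse_attention(prompt: str) -> list[tuple[str, float]]:
    weighted = []
    # (text:1.21) - explicit weight
    for match in re.finditer(r"\(([^():]+):([\d.]+)\)", prompt):
        weighted.append((match.group(1), float(match.group(2))))
    # ((text)) - each enclosing pair multiplies the weight by 1.1
    for match in re.finditer(r"(\(+)([^():]+)(\)+)", prompt):
        depth = min(len(match.group(1)), len(match.group(3)))
        weighted.append((match.group(2), round(1.1 ** depth, 3)))
    return weighted

print(parse_attention("a man in a ((tuxedo))"))     # [('tuxedo', 1.21)]
print(parse_attention("a man in a (tuxedo:1.21)"))  # [('tuxedo', 1.21)]
```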
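
Because generation parameters are saved into PNG text chunks, they can also be read back outside the UI. A minimal sketch with Pillow, assuming the chunk key is `parameters` and that `generated.png` is an image the web UI produced:

```python
from PIL import Image

# PNG text chunks are exposed through the image's info dict.
# Assumptions for this sketch: the key is "parameters" and
# "generated.png" was produced by the web UI.
img = Image.open("generated.png")
params = img.info.get("parameters")
if params is None:
    print("no generation parameters found in this file")
else:
    # The chunk holds the prompt plus a line of key: value settings.
    print(params)
```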
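
The API can be exercised over plain HTTP once the server is running. A minimal sketch using `requests`; the `--api` launch flag, the `/sdapi/v1/txt2img` route, and the payload fields are assumptions based on how the API is commonly documented, and the address is the default local Gradio port:

```python
import base64
import requests

# Assumptions: the server was started with the --api flag, listens on
# the default local port 7860, and exposes /sdapi/v1/txt2img.
payload = {
    "prompt": "a man in a (tuxedo:1.21)",
    "negative_prompt": "blurry",
    "steps": 20,
    "width": 512,
    "height": 512,
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img",
                     json=payload, timeout=300)
resp.raise_for_status()

# Images come back base64-encoded; decode and save the first one.
images = resp.json()["images"]
with open("out.png", "wb") as f:
    f.write(base64.b64decode(images[0]))
```
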
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.

Alternatively, use online services (like Google Colab):

- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)

### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/windows/), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Place `model.ckpt` in the `models` directory (see [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) for where to get it).
5. _(Optional)_ Place `GFPGANv1.4.pth` in the base directory, alongside `webui.py` (see [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) for where to get it).
6. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.

### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. To install in `/home/$(whoami)/stable-diffusion-webui/`, run:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
```

### Installation on Apple Silicon

Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).

## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)

## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).

## Credits
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Doggettx - Cross Attention layer optimization - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing
- InvokeAI, lstein - Cross Attention layer optimization - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Rinon Gal - Textual Inversion - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas)
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers - https://github.com/KichangKim/DeepDanbooru
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you, Anonymous user.
- (You)