Upload README.md with huggingface_hub
README.md CHANGED
@@ -22,6 +22,9 @@ tags:
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

+> [!CAUTION]
+> T5, Clip, and ae are still not provided in the original model
+
# FLUX.1-Redux-dev-GGUF

## Original Model
@@ -38,8 +41,8 @@ tags:

```bash
wasmedge --dir .:. sd-api-server.wasm \
+  --model-name flux1-redux-dev \
+  --diffusion-model flux1-redux-dev-Q4_0.gguf \
  --vae ae.safetensors \
  --clip-l clip_l.safetensors \
  --t5xxl t5xxl-Q8_0.gguf
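Once the server started above is running, it can be exercised over HTTP. A minimal sketch — the port (8080) and the `/v1/images/generations` endpoint are assumptions based on the LlamaEdge sd-api-server documentation, not stated in this card; adjust them to your setup:

```shell
# Assumed defaults for sd-api-server; change if your build differs.
HOST="http://localhost:8080"
PAYLOAD='{"model": "flux1-redux-dev", "prompt": "a vintage camera on a wooden desk"}'

# Inspect the request body before sending it:
echo "$PAYLOAD"

# Uncomment once the server from the command above is running:
# curl -s -X POST "$HOST/v1/images/generations" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```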
@@ -53,8 +56,8 @@ tags:
wasmedge --dir .:. \
  --dir lora-models:lora-models \
  sd-api-server.wasm \
+  --model-name flux1-redux-dev \
+  --diffusion-model flux1-redux-dev-Q4_0.gguf \
  --vae ae.safetensors \
  --clip-l clip_l.safetensors \
  --t5xxl t5xxl-Q8_0.gguf \
@@ -67,26 +70,11 @@ tags:

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
+| [flux1-redux-dev-Q4_0.gguf](https://huggingface.co/second-state/FLUX.1-Redux-dev-GGUF/blob/main/flux1-redux-dev-Q4_0.gguf) | Q4_0 | 4 | 36.3 MB | |
+| [flux1-redux-dev-Q4_1.gguf](https://huggingface.co/second-state/FLUX.1-Redux-dev-GGUF/blob/main/flux1-redux-dev-Q4_1.gguf) | Q4_1 | 4 | 40.4 MB | |
+| [flux1-redux-dev-Q5_0.gguf](https://huggingface.co/second-state/FLUX.1-Redux-dev-GGUF/blob/main/flux1-redux-dev-Q5_0.gguf) | Q5_0 | 5 | 44.4 MB | |
+| [flux1-redux-dev-Q5_1.gguf](https://huggingface.co/second-state/FLUX.1-Redux-dev-GGUF/blob/main/flux1-redux-dev-Q5_1.gguf) | Q5_1 | 5 | 48.4 MB | |
+| [flux1-redux-dev-Q8_0.gguf](https://huggingface.co/second-state/FLUX.1-Redux-dev-GGUF/blob/main/flux1-redux-dev-Q8_0.gguf) | Q8_0 | 8 | 68.6 MB | |
+| [flux1-redux-dev.safetensors](https://huggingface.co/second-state/FLUX.1-Redux-dev-GGUF/blob/main/flux1-redux-dev.safetensors) | f16 | 16 | 129 MB | |
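The files in the table can be fetched programmatically with `huggingface_hub`, the same library named in the commit message. A minimal sketch using the real `hf_hub_download` API; the file list below holds only the Q4_0 quant from the table — add other quants as needed, and note that the VAE and text encoders referenced by the run commands may need to be obtained separately (see the caution note above):

```python
# Sketch: fetch quantized files from this repo with huggingface_hub.
REPO_ID = "second-state/FLUX.1-Redux-dev-GGUF"

# File names taken from the table above.
FILES = [
    "flux1-redux-dev-Q4_0.gguf",
]

def download_all(local_dir: str = ".") -> list:
    # Imported lazily so the constants above are usable without the package installed.
    from huggingface_hub import hf_hub_download
    return [
        hf_hub_download(repo_id=REPO_ID, filename=name, local_dir=local_dir)
        for name in FILES
    ]
```

Calling `download_all()` downloads each file and returns its local path.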

**Quantized with stable-diffusion.cpp `master-c3eeb669`.**