Update README.md
README.md CHANGED

@@ -24,20 +24,20 @@ widget:
## self-hosted api
- run it with `gguf-connector`; activate the backend in console/terminal by
```
ggc w8
```
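The three `ggc` subcommands used in this section (`w8`, `w7`, `w6`) each start a different connector. A hypothetical dispatch table sketching that mapping — the connector labels are modeled on this README's wording, not taken from gguf-connector's source:

```python
# Hypothetical subcommand -> connector mapping, modeled on the README's
# examples (ggc w8 / w7 / w6); NOT gguf-connector's actual implementation.
CONNECTORS = {
    "w8": "sd3.5 connector",
    "w7": "fastapi lumina connector",
    "w6": "fastapi flux connector",
}

def dispatch(subcommand: str) -> str:
    """Return the connector a ggc-style launcher would start."""
    if subcommand not in CONNECTORS:
        raise SystemExit(f"unknown subcommand: {subcommand}")
    return CONNECTORS[subcommand]
```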
- choose your model* file
>
>GGUF available. Select which one to use:
>
>1. sd3.5-2b-lite-iq4_nl.gguf [[1.74GB](https://huggingface.co/calcuis/sd3.5-lite-gguf/blob/main/sd3.5-2b-lite-iq4_nl.gguf)]
>2. sd3.5-2b-lite-mxfp4_moe.gguf [[2.86GB](https://huggingface.co/calcuis/sd3.5-lite-gguf/blob/main/sd3.5-2b-lite-mxfp4_moe.gguf)]
>
>Enter your choice (1 to 2): _
>
*accepts the recently released sd3.5 2b model gguf; this gives you the fastest experience, even on a low-tier gpu
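The selection dialog above boils down to: scan a folder for `*.gguf` files, list them with sizes, and read an index. A minimal sketch of that flow — the function name and prompt wording are modeled on the console output shown here, not taken from gguf-connector's code:

```python
from pathlib import Path
from typing import Optional

def choose_gguf(directory: str = ".") -> Optional[Path]:
    """List *.gguf files in `directory` and return the user's pick."""
    files = sorted(Path(directory).glob("*.gguf"))
    if not files:
        print("No GGUF available.")
        return None
    print("GGUF available. Select which one to use:\n")
    for i, f in enumerate(files, start=1):
        size_gb = f.stat().st_size / 1024**3
        print(f"{i}. {f.name} [{size_gb:.2f}GB]")
    choice = int(input(f"\nEnter your choice (1 to {len(files)}): "))
    return files[choice - 1]
```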

- or opt for the fastapi **lumina** connector
```
ggc w7
```

@@ -50,22 +50,22 @@ ggc w7

>
>Enter your choice (1 to 2): _
>
*as lumina has no lite version at the moment, you might need to increase the steps to around 25 for better output

- or opt for the fastapi **flux** connector
```
ggc w6
```
- choose your model* file
>
>GGUF available. Select which one to use:
>
>1. flux-dev-lite-q2_k.gguf [[4.08GB](https://huggingface.co/calcuis/krea-gguf/blob/main/flux-dev-lite-q2_k.gguf)]
>2. flux-krea-lite-q2_k.gguf [[4.08GB](https://huggingface.co/calcuis/krea-gguf/blob/main/flux-krea-lite-q2_k.gguf)]
>
>Enter your choice (1 to 2): _
>
*accepts any flux model gguf; lite is recommended to save loading time
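Why prefer lite? Smaller files read from disk proportionally faster. A back-of-envelope comparison using the sizes listed in this README — the 1 GB/s read bandwidth is an assumed figure for illustration only:

```python
# File sizes (GB) as listed in this README; bandwidth is an assumption.
SIZES_GB = {
    "sd3.5-2b-lite-iq4_nl.gguf": 1.74,
    "sd3.5-2b-lite-mxfp4_moe.gguf": 2.86,
    "flux-dev-lite-q2_k.gguf": 4.08,
    "flux-krea-lite-q2_k.gguf": 4.08,
}

def est_load_seconds(filename: str, bandwidth_gb_s: float = 1.0) -> float:
    """Rough time to read the whole file at the given bandwidth."""
    return SIZES_GB[filename] / bandwidth_gb_s
```

For example, at the assumed bandwidth the iq4_nl sd3.5 file reads about 1.1 GB less than the mxfp4 variant, which is the "fastest experience" trade-off noted above.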