- text: a raccoon in a hat
  output:
    url: https://raw.githubusercontent.com/calcuis/gguf-pack/master/w8f.png
- text: a raccoon in a hat
  output:
    url: https://raw.githubusercontent.com/calcuis/gguf-pack/master/w6a.png
- text: a dog walking in a cyber city with joy
  output:
    url: https://raw.githubusercontent.com/calcuis/gguf-pack/master/w6b.png
- text: a dog walking in a cyber city with joy
  output:
    url: https://raw.githubusercontent.com/calcuis/gguf-pack/master/w6c.png
- text: a dog walking in a cyber city with joy
  output:
    url: https://raw.githubusercontent.com/calcuis/gguf-pack/master/w8e.png

## self-hosted api
- run it with `gguf-connector`; activate the backend in console/terminal by
```
ggc w6
```
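if the `ggc` command is not available yet, `gguf-connector` can be installed first (a minimal sketch, assuming the PyPI package name matches the one above and a standard pip setup):

```shell
# install/upgrade gguf-connector, which provides the ggc command
pip install -U gguf-connector
```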
- choose your model file
>
>GGUF available. Select which one to use:
>
>1. flux-dev-lite-q2_k.gguf [[4.08GB](https://huggingface.co/calcuis/krea-gguf/blob/main/flux-dev-lite-q2_k.gguf)]
>2. flux-krea-lite-q2_k.gguf [[4.08GB](https://huggingface.co/calcuis/krea-gguf/blob/main/flux-krea-lite-q2_k.gguf)]
>
>Enter your choice (1 to 2): _
>

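the files listed above live in the linked repo; to fetch one ahead of time, something like this should work (a sketch, assuming the `huggingface_hub` client is installed; the repo id is taken from the links above):

```shell
# pre-download a quantized model file from the linked repo into the current directory
huggingface-cli download calcuis/krea-gguf flux-krea-lite-q2_k.gguf --local-dir .
```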
- or opt for the lumina api connector
```
ggc w7
```
- choose your model file
>
>GGUF available. Select which one to use:
>
>1. lumina2-q4_0.gguf [[1.47GB](https://huggingface.co/calcuis/lumina-gguf/blob/main/lumina2-q4_0.gguf)]
>2. lumina2-q8_0.gguf [[2.86GB](https://huggingface.co/calcuis/lumina-gguf/blob/main/lumina2-q8_0.gguf)]
>
>Enter your choice (1 to 2): _
>

* as lumina is not a lite model, you might need to increase the steps to around 25 for better output

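the choosers above are interactive; if the prompt reads its selection from standard input (an assumption — this is not a documented `ggc` flag), the choice can be piped in for scripted runs:

```shell
# answer the "Enter your choice" prompt non-interactively (assumes ggc reads stdin)
printf '2\n' | ggc w7
```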
- or opt for the sd3.5 api connector
```
ggc w8
```
- choose your model file