calcuis committed on
Commit 913e55c · verified · 1 Parent(s): 0de06ca

Update README.md

Files changed (1):
  1. README.md +11 -11

README.md CHANGED
@@ -24,20 +24,20 @@ widget:
  ## self-hosted api
  - run it with `gguf-connector`; activate the backend in console/terminal by
  ```
- ggc w6
+ ggc w8
  ```
  - choose your model* file
  >
  >GGUF available. Select which one to use:
  >
- >1. flux-dev-lite-q2_k.gguf [[4.08GB](https://huggingface.co/calcuis/krea-gguf/blob/main/flux-dev-lite-q2_k.gguf)]
- >2. flux-krea-lite-q2_k.gguf [[4.08GB](https://huggingface.co/calcuis/krea-gguf/blob/main/flux-krea-lite-q2_k.gguf)]
+ >1. sd3.5-2b-lite-iq4_nl.gguf [[1.74GB](https://huggingface.co/calcuis/sd3.5-lite-gguf/blob/main/sd3.5-2b-lite-iq4_nl.gguf)]
+ >2. sd3.5-2b-lite-mxfp4_moe.gguf [[2.86GB](https://huggingface.co/calcuis/sd3.5-lite-gguf/blob/main/sd3.5-2b-lite-mxfp4_moe.gguf)]
  >
  >Enter your choice (1 to 2): _
  >
- *accept any flux model gguf
+ *accept sd3.5 2b model gguf recently, this will give you the fastest experience for even low tier gpu
 
- - or opt api lumina connector
+ - or opt fastapi **lumina** connector
  ```
  ggc w7
  ```
@@ -50,22 +50,22 @@ ggc w7
  >
  >Enter your choice (1 to 2): _
  >
- *as lumina is not a lite model, might need to increase the step to around 25 for better output
+ *as lumina is no lite version recently, might need to increase the step to around 25 for better output
 
- - or opt api sd3.5 connector
+ - or opt fastapi **flux** connector
  ```
- ggc w8
+ ggc w6
  ```
  - choose your model* file
  >
  >GGUF available. Select which one to use:
  >
- >1. sd3.5-2b-lite-iq4_nl.gguf [[1.74GB](https://huggingface.co/calcuis/sd3.5-lite-gguf/blob/main/sd3.5-2b-lite-iq4_nl.gguf)]
- >2. sd3.5-2b-lite-mxfp4_moe.gguf [[2.86GB](https://huggingface.co/calcuis/sd3.5-lite-gguf/blob/main/sd3.5-2b-lite-mxfp4_moe.gguf)]
+ >1. flux-dev-lite-q2_k.gguf [[4.08GB](https://huggingface.co/calcuis/krea-gguf/blob/main/flux-dev-lite-q2_k.gguf)]
+ >2. flux-krea-lite-q2_k.gguf [[4.08GB](https://huggingface.co/calcuis/krea-gguf/blob/main/flux-krea-lite-q2_k.gguf)]
  >
  >Enter your choice (1 to 2): _
  >
- *accept 2b model recently
+ *accept any flux model gguf, lite is recommended for saving loading time
 
  ![screenshot](https://raw.githubusercontent.com/calcuis/gguf-pack/master/w8a.png)
 
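The console prompts quoted in the diff follow a simple pattern: the backend scans for `.gguf` files and maps a 1-based menu answer to a filename via the "Enter your choice (1 to N)" prompt. A minimal Python sketch of that chooser logic (hypothetical helper names for illustration, not `gguf-connector`'s actual code):

```python
from pathlib import Path


def list_gguf_models(folder="."):
    # Scan a folder for .gguf files; sort so the menu order is stable.
    return sorted(p.name for p in Path(folder).glob("*.gguf"))


def choose_model(models, choice):
    # Map a 1-based menu answer to a filename, mirroring the
    # "Enter your choice (1 to N)" prompt shown in the README.
    if not 1 <= choice <= len(models):
        raise ValueError(f"Enter your choice (1 to {len(models)})")
    return models[choice - 1]
```

With the two flux files from the diff, `choose_model(files, 2)` would return `flux-krea-lite-q2_k.gguf`; an out-of-range answer re-raises the same range hint the prompt displays.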