---
tags:
- gguf-connector
---

## chat
- gpt-like dialogue interaction flow
- simple yet powerful multi-agent plus multi-modal implementation
- prepare your llm model (replaceable; can be a serverless api endpoint)
- prepare your multimedia model(s), i.e., image, video (replaceable as well)
- call a specific agent/model by adding the @ symbol in front (tag the name like on any social media)
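
The @-tag convention above can be sketched as a tiny message router. This is a hypothetical illustration only; `route_message` and the agent names are not part of gguf-connector's actual API:

```python
import re

def route_message(message, agents):
    """Hypothetical sketch of the @-tag convention: a leading @name sends
    the prompt to that agent only; otherwise every agent responds."""
    match = re.match(r"@(\w+)\s*(.*)", message)
    if match and match.group(1) in agents:
        return {match.group(1): match.group(2)}   # single tagged agent
    return {name: message for name in agents}     # broadcast to everybody

agents = ["llm", "image", "video"]
print(route_message("@image a cat on the moon", agents))  # only "image" gets it
print(route_message("hi", agents))                        # everybody responds
```

An unknown tag falls through to the broadcast case, which matches the behavior described: untagged (or mistyped) prompts go to all llm agents.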

## frontend (static webpage or localhost)
- https://chat.gguf.org

## backend (serverless api or localhost)
- run it with `gguf-connector`
- activate the backend(s) in console/terminal
- 1) llm chat model selection
```
ggc e4
```
>GGUF available. Select which one to use:
>
>1. llm-q4_0.gguf <<<<<<<<<< opt this one first
>2. picture-iq4_xs.gguf (image model example)
>3. video-iq4_nl.gguf (video model example)
>
>Enter your choice (1 to 3): _
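
The numbered prompt above is a plain 1-based menu lookup. A hypothetical sketch of that selection logic (the real `ggc` console handles this internally; `select_model` is illustrative, not an actual gguf-connector function):

```python
def select_model(choice, models):
    """Hypothetical sketch of the GGUF selection prompt: map a 1-based
    menu choice to a model filename, rejecting out-of-range input."""
    if not 1 <= choice <= len(models):
        raise ValueError(f"Enter your choice (1 to {len(models)})")
    return models[choice - 1]

models = ["llm-q4_0.gguf", "picture-iq4_xs.gguf", "video-iq4_nl.gguf"]
print(select_model(1, models))  # -> llm-q4_0.gguf
```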
- 2) picture model (opt the second one above)
```
ggc w8
```

- 3) video model (opt the third one above)
```
ggc e5
```

- make sure your endpoint(s) don't break by double-checking them
- since `ggc w8` and/or `ggc e5` will create a .py backend file in your current directory, they might trigger a uvicorn relaunch if you put everything in the same directory; if you keep those .py files, you can execute `uvicorn backend:app --reload --port 8000` and/or `uvicorn backend5:app --reload --port 8005` instead for the next launch
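
One way to double-check the endpoints is a small reachability probe. Since the uvicorn commands above suggest FastAPI apps, the auto-generated `/docs` page is a reasonable URL to hit; the ports and path here are assumptions based on those commands, not guaranteed routes:

```python
import urllib.request
import urllib.error

def endpoint_alive(url, timeout=2.0):
    """Return True if the given backend URL answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# probe both assumed local backends before opening the chat frontend
for port in (8000, 8005):
    print(port, endpoint_alive(f"http://localhost:{port}/docs"))
```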

## how it works?
- if you ask anything, e.g., just say `hi`, everybody (all llm agents) will respond

![screenshot](chat.png)

- you can tag a specific agent with @ for a single response (see below)

![screenshot](tag.png)

- for functional agent(s), you should always call with the @ tag
- if you want to call the image agent/model, type `@image` first

![screenshot](image.png)

- then the image agent will work for you like the example below

![screenshot](image2.png)

- for the video agent, in this case, you should prompt with a picture (drag and drop) plus a text instruction like below

![screenshot](video.png)

- then the video agent will work for you like the example below

![screenshot](video2.png)

## more settings
- check the `Settings` on the top right corner
- you should be able to:
  - change/reset the particular api/endpoint(s)
  - for multimedia model(s)
    - adjust the parameters for image and/or video agent/model(s), i.e., sampling rate (steps), length (fps/frames), etc.
  - for llm (text response model)
    - add/delete agent(s)
    - assign/disable vision for your agent(s), but it depends on the model you opt for (with vision or not)

![screenshot](setting.png)