---

base_model:
- {base_model}
---

# {model_name} GGUF

The recommended way to run this model:

```sh
llama-server -hf {namespace}/{model_name}-GGUF
```

Then open http://localhost:8080 in your browser to use the built-in web UI.
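
You can also query the running server programmatically. llama-server exposes an OpenAI-compatible HTTP API; a minimal sketch, assuming the server is running locally on the default port 8080:

```sh
# Send a chat completion request to llama-server's
# OpenAI-compatible endpoint (server must already be running)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```

The response is a JSON object in the OpenAI chat-completions format, with the generated text under `choices[0].message.content`.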