---
base_model:
- {base_model}
---
# {model_name} GGUF
Recommended way to run this model:
```sh
llama-server -hf {namespace}/{model_name}-GGUF -c 0 -fa
```
Then open the built-in web UI at http://localhost:8080
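
Besides the web UI, `llama-server` also exposes an OpenAI-compatible HTTP API on the same port. As a sketch (assuming the server is already running locally on the default port 8080), you can query it with `curl`:

```sh
# Send a chat request to the OpenAI-compatible endpoint served by llama-server.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Hello!"}
        ]
      }'
```

The response is a JSON chat-completion object, so existing OpenAI client libraries can be pointed at this endpoint as well.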