|
|
---
base_model: appvoid/arco-chat-merged-3
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# arco-chat

**Model creator:** [appvoid](https://huggingface.co/appvoid)<br/>
**GGUF quantization:** provided by [appvoid](https://huggingface.co/appvoid) using `llama.cpp`<br/>

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.

## Use with Ollama

```bash
ollama run "hf.co/appvoid/arco-chat:<quantization>"
```

## Use with LM Studio

```bash
lms load "appvoid/arco-chat"
```

## Use with llama.cpp CLI

```bash
llama-cli --hf-repo "appvoid/arco-chat" --hf-file "arco-chat-F16.gguf" -p "The meaning to life and the universe is"
```

## Use with llama.cpp Server

```bash
llama-server --hf-repo "appvoid/arco-chat" --hf-file "arco-chat-F16.gguf" -c 4096
```
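`llama-server` exposes an OpenAI-compatible HTTP API (on port 8080 by default in recent llama.cpp builds). As a minimal client-side sketch, assuming the server above is running locally, the helpers below (`build_chat_request` and `ask` are our own names, not part of llama.cpp) show how to query it with only the Python standard library:

```python
import json
import urllib.request


def build_chat_request(prompt, host="http://localhost:8080"):
    """Build an OpenAI-style chat completion request for llama-server.

    Assumes llama-server's default port (8080) and its OpenAI-compatible
    /v1/chat/completions endpoint, available in recent llama.cpp builds.
    """
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


def ask(prompt, host="http://localhost:8080"):
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_chat_request(prompt, host)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Once the server is up, `ask("Hello, arco-chat!")` returns the model's reply as a string.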