How to use from Unsloth Studio
Install Unsloth Studio (Windows)
# Install Unsloth Studio via PowerShell
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for appvoid/arco-chat to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for appvoid/arco-chat to start chatting
Quick Links
arco-chat
Model creator: appvoid
GGUF quantization: provided by appvoid using llama.cpp
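You can also fetch a GGUF file directly with the huggingface_hub CLI (a sketch; the F16 filename matches the one used in the llama.cpp examples below):
# Download the F16 GGUF into the current directory
huggingface-cli download appvoid/arco-chat arco-chat-F16.gguf --local-dir .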
Special thanks
🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
Use with Ollama
ollama run "hf.co/appvoid/arco-chat:<quantization>"
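Replace <quantization> with one of the GGUF tags published in the repo. For example, assuming a Q4_K_M quant is available (check the repo's file list first):
# Q4_K_M is an assumed tag; substitute whichever quantization the repo actually ships
ollama run "hf.co/appvoid/arco-chat:Q4_K_M"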
Use with LM Studio
lms load "appvoid/arco-chat"
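Once loaded, LM Studio can also serve the model over its local OpenAI-compatible API. A minimal sketch, assuming LM Studio's default port 1234 and that the model key matches the name above:
# Start LM Studio's local server, then query the loaded model
lms server start
curl http://localhost:1234/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "appvoid/arco-chat", "messages": [{"role": "user", "content": "Hello!"}]}'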
Use with llama.cpp CLI
llama-cli --hf-repo "appvoid/arco-chat" --hf-file "arco-chat-F16.gguf" -p "The meaning to life and the universe is"
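For an interactive chat session instead of a one-shot completion, you can use llama.cpp's conversation mode (a sketch; -cnv enables llama-cli's interactive chat):
llama-cli --hf-repo "appvoid/arco-chat" --hf-file "arco-chat-F16.gguf" -cnv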
Use with llama.cpp Server
llama-server --hf-repo "appvoid/arco-chat" --hf-file "arco-chat-F16.gguf" -c 4096
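By default the server listens on port 8080 and exposes an OpenAI-compatible endpoint, so you can query it with curl (a minimal sketch):
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "The meaning to life and the universe is"}]}'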
Model tree for appvoid/arco-chat
Base model: appvoid/arco-chat-merged-3
Install Unsloth Studio (macOS, Linux, WSL)
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for appvoid/arco-chat to start chatting