How to use from llama.cpp

Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf llmware/bonchon:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf llmware/bonchon:Q4_K_M
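Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch with curl, assuming llama-server is listening on its default port (8080); the prompt text is only an example:

# Query the local OpenAI-compatible endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Summarize the key terms of this agreement in one sentence."}
        ],
        "temperature": 0.0
      }'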
Use pre-built binary

# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf llmware/bonchon:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf llmware/bonchon:Q4_K_M
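For scripted, non-interactive use, llama-cli also accepts a prompt directly on the command line. A sketch using its documented -p, -n, and -no-cnv flags (the prompt is only an example; older builds may not need -no-cnv):

# One-shot prompt, up to 256 generated tokens, no interactive chat loop:
./llama-cli -hf llmware/bonchon:Q4_K_M -p "List three obligations of the tenant in this lease:" -n 256 -no-cnv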
Build from source code

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf llmware/bonchon:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf llmware/bonchon:Q4_K_M
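Nothing here is specific to this repository, but when building from source you can also enable GPU offload. A sketch using the GGML_CUDA CMake option and the -ngl runtime flag from the llama.cpp build docs, assuming a CUDA-capable GPU:

# Configure with CUDA support, then offload as many layers as possible:
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli
./build/bin/llama-server -hf llmware/bonchon:Q4_K_M -ngl 99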
Use Docker

docker model run hf.co/llmware/bonchon:Q4_K_M
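If you are not using Docker Model Runner, the llama.cpp project also publishes container images. A sketch assuming the server image documented in the llama.cpp repository (check there for the current image tags):

# Run llama-server inside the official container and expose port 8080:
docker run -p 8080:8080 ghcr.io/ggml-org/llama.cpp:server -hf llmware/bonchon:Q4_K_M --host 0.0.0.0 --port 8080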
This repository includes some of our favorite bonchon ('side dishes' in Korean).
It currently contains several of our favorite GGUF files from TheBloke, including four of our favorite 7B chat models, all in Q4_K_M quantization.
The repository is public, but it is intended primarily for use in conjunction with other llmware models, datasets, and libraries.
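To work with the files locally (for example, alongside the llmware library) rather than streaming them through the -hf flag, the repository can be mirrored with the Hugging Face CLI. A minimal sketch, with the target directory chosen only as an example:

# Download all GGUF files in the repository to a local folder:
huggingface-cli download llmware/bonchon --local-dir ./bonchon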
Please note the specific licensing information and references for the files included in the repository:
- HuggingFaceH4/Zephyr-7B-GGUF - MIT License - original repository link
- Teknium/OpenHermes-2.5-Mistral-7B-GGUF - Apache 2.0 License - original repository link
- Llama2-Chat-7B-GGUF - Llama2 License - original repository link
- Starling-7B-GGUF - CC-BY-NC-4.0 License - Non-Commercial - original repository link
- EleutherAI/Llemma-7B-GGUF - Apache 2.0 License - original repository link
Please also see TheBloke for more information on GGUF.
Install from brew

brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf llmware/bonchon:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf llmware/bonchon:Q4_K_M