Wellness AI
Install from brew
brew install llama.cpp

Install from winget
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf AdityaKothari/WellnessAI-7B-5-bit:Q5_K_M
# Run inference directly in the terminal:
llama-cli -hf AdityaKothari/WellnessAI-7B-5-bit:Q5_K_M
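Once llama-server is running (via any of the install methods on this page), any OpenAI-compatible client can talk to it. Below is a minimal sketch using the openai Python package, assuming the server's default port 8080; no real API key is required, and the model name and prompt are illustrative:

from openai import OpenAI

# Point the client at the local llama-server (default port 8080).
# The api_key value is a placeholder; llama-server does not require one by default.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="WellnessAI-7B-5-bit",  # illustrative; the local server serves the loaded model
    messages=[{"role": "user", "content": "Suggest a simple evening wind-down routine."}],
)
print(response.choices[0].message.content)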
Use pre-built binaries
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf AdityaKothari/WellnessAI-7B-5-bit:Q5_K_M
# Run inference directly in the terminal:
./llama-cli -hf AdityaKothari/WellnessAI-7B-5-bit:Q5_K_M

Build from source
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf AdityaKothari/WellnessAI-7B-5-bit:Q5_K_M
# Run inference directly in the terminal:
./build/bin/llama-cli -hf AdityaKothari/WellnessAI-7B-5-bit:Q5_K_M

Docker
docker model run hf.co/AdityaKothari/WellnessAI-7B-5-bit:Q5_K_M

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Usage
from transformers import AutoModel

# Load the model weights with Transformers
model = AutoModel.from_pretrained("AdityaKothari/WellnessAI-7B-5-bit")
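Note that AutoModel loads the bare transformer without a language-modeling head. For actual text generation, a minimal sketch along these lines should work, assuming the repo ships standard Transformers-format weights (the prompt is illustrative):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model with its causal-LM head for generation.
tokenizer = AutoTokenizer.from_pretrained("AdityaKothari/WellnessAI-7B-5-bit")
model = AutoModelForCausalLM.from_pretrained("AdityaKothari/WellnessAI-7B-5-bit")

# Tokenize a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("Suggest a simple evening wind-down routine.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))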
Quantization: 5-bit (Q5_K_M)
Base model: mistralai/Mistral-7B-v0.3