Instructions for using CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m",
    filename="CWC-Mistral-Nemo-12B-v2-GGUF-q4_k_m-health-nutrition-natural-medicine.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What are good dietary sources of magnesium?"}
    ]
)
```
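llama-cpp-python also supports token streaming via `stream=True`, where chunks follow the OpenAI delta convention. A minimal sketch (the helper name and prompt are illustrative, not from the model card):

```python
def stream_text(llm, messages):
    """Yield content fragments from a streaming chat completion."""
    for chunk in llm.create_chat_completion(messages=messages, stream=True):
        delta = chunk["choices"][0]["delta"]
        # The first chunk usually carries only the role, no content.
        if delta.get("content"):
            yield delta["content"]

# Usage (with `llm` loaded as above):
# for piece in stream_text(llm, [{"role": "user", "content": "What is vitamin D?"}]):
#     print(piece, end="", flush=True)
```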
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M
```
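Once llama-server is running, any OpenAI-compatible client can talk to it. As a standard-library-only sketch (assuming the server's default port 8080; the function name and prompt are illustrative):

```python
import json
import urllib.request

def build_request(prompt, base_url="http://localhost:8080/v1"):
    """Build a POST request for the server's OpenAI-compatible chat endpoint."""
    payload = {"messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With llama-server running:
# with urllib.request.urlopen(build_request("What is magnesium good for?")) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```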
- LM Studio
- Jan
- Ollama
How to use CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m with Ollama:
```sh
ollama run hf.co/CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M
```
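Ollama also exposes an HTTP API whose native /api/chat endpoint streams newline-delimited JSON events by default. A sketch of assembling a reply from that stream (assumes Ollama on its default port 11434; the helper name is illustrative):

```python
import json
import urllib.request

MODEL = "hf.co/CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M"

def parse_stream(lines):
    """Assemble the reply text from Ollama's newline-delimited JSON stream."""
    parts = []
    for line in lines:
        event = json.loads(line)
        # The final event has "done": true and carries no new content.
        if not event.get("done"):
            parts.append(event["message"]["content"])
    return "".join(parts)

# With Ollama running:
# payload = {"model": MODEL, "messages": [{"role": "user", "content": "Hi"}]}
# req = urllib.request.Request(
#     "http://localhost:11434/api/chat",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(parse_stream(resp))
```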
- Unsloth Studio
How to use CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser and search for
# CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m to start chatting.
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser and search for
# CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m to start chatting.
```
Using HuggingFace Spaces for Unsloth
No setup required. Open https://huggingface.co/spaces/unsloth/studio in your browser and search for CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m to start chatting.
- Pi
How to use CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the following to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M" }
      ]
    }
  }
}
```

Run Pi

```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m with Docker Model Runner:
```sh
docker model run hf.co/CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M
```
- Lemonade
How to use CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m:Q4_K_M
```

Run and chat with the model

```sh
lemonade run user.CWC-Mistral-Nemo-12B-V2-q4_k_m-Q4_K_M
```

List all available models

```sh
lemonade list
```
This model was created by the non-profit CWC (Consumer Wellness Center), with a strong emphasis on curated training data in the realms of nutrition, natural health, wellness, disease prevention, phytochemistry, and related topics.
Built on open-mistral-nemo 12B, with credit to Mistral, this model uses a variety of SFT techniques to strongly alter the base model's learned representations and overcome the pro-pharma bias found in nearly all base models.
The curated training dataset consists of over 100 million pages of content selected through algorithmic classification, including science papers, transcripts, book text, article text, and more. No user-generated comments or chat data were used.
This allows the model to achieve RAG-like domain knowledge without using RAG. Out of the box, with default system prompts, it outperforms far larger and more complex models that use RAG layers.
A context window of 8192 tokens is recommended. Flash attention is supported.
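These recommendations map onto constructor arguments in llama-cpp-python. A hedged sketch (the `flash_attn` keyword is available in recent llama-cpp-python releases; the helper name is illustrative):

```python
# Settings recommended by the model card.
RECOMMENDED = {"n_ctx": 8192, "flash_attn": True}

def load_model():
    """Load the quantized model with the recommended context size and flash attention."""
    from llama_cpp import Llama  # deferred so this file imports without llama-cpp-python
    return Llama.from_pretrained(
        repo_id="CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m",
        filename="CWC-Mistral-Nemo-12B-v2-GGUF-q4_k_m-health-nutrition-natural-medicine.gguf",
        **RECOMMENDED,
    )
```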
This model is provided by CWC, and its development was led by Mike Adams.
Additional models and quants will also be published.