---
language:
  - en
tags:
  - meta-ai
  - meta-pytorch
  - llama-cpp
license: fair-noncommercial-research-license
license_link: https://huggingface.co/facebook/fair-noncommercial-research-license
base_model: facebook/cwm
---

# PsiPi/cwm-Q2_K-GGUF

Refer to the [original model card](https://huggingface.co/facebook/cwm) for more details on the model.

## Fitting in 24 GB of VRAM in LM Studio

- Flash attention: enabled
- K cache quantization type: Q8_0
- V cache quantization type: Q8_0
- GPU layer offload: 64 layers
- Context length: ~50k tokens
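
For reference, these LM Studio settings map roughly onto llama.cpp server flags. The following is a sketch, not a definitive invocation: exact flag spellings vary across llama.cpp versions, and `-c 51200` stands in for the ~50k context.

```bash
# -fa: flash attention (required for a quantized V cache);
# --cache-type-k/--cache-type-v: KV cache quantization type;
# -ngl: layers offloaded to the GPU; -c: context length (~50k).
llama-server --hf-repo PsiPi/cwm-Q2_K-GGUF --hf-file cwm-q2_k.gguf \
  -fa --cache-type-k q8_0 --cache-type-v q8_0 -ngl 64 -c 51200
```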

## Fitting in 24 GB of VRAM in LM Studio with a Q4_0 KV cache

- Flash attention: enabled
- K cache quantization type: Q4_0
- V cache quantization type: Q4_0
- GPU layer offload: 64 layers
- Context length: 131072 tokens (full)
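
The same settings in llama.cpp terms, again as a version-dependent sketch:

```bash
# The Q4_0 KV cache frees enough VRAM for the full 131072-token context.
llama-server --hf-repo PsiPi/cwm-Q2_K-GGUF --hf-file cwm-q2_k.gguf \
  -fa --cache-type-k q4_0 --cache-type-v q4_0 -ngl 64 -c 131072
```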

## Use with llama.cpp

Install llama.cpp through brew (works on macOS and Linux):

```bash
brew install llama.cpp
```
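
You can sanity-check the install by printing the build info:

```bash
llama-cli --version
```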

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo PsiPi/cwm-Q2_K-GGUF --hf-file cwm-q2_k.gguf -p "The meaning to life and the universe is"
```
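
For more control over generation, the usual llama.cpp sampling and runtime flags apply; the values below are illustrative, not recommendations:

```bash
# -n: max tokens to generate; --temp: sampling temperature;
# -ngl: layers offloaded to the GPU; -c: context size.
llama-cli --hf-repo PsiPi/cwm-Q2_K-GGUF --hf-file cwm-q2_k.gguf \
  -p "The meaning to life and the universe is" \
  -n 128 --temp 0.7 -ngl 64 -c 4096
```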

Server:

```bash
llama-server --hf-repo PsiPi/cwm-Q2_K-GGUF --hf-file cwm-q2_k.gguf -c 2048
```
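
Once running, the server exposes an OpenAI-compatible HTTP API (port 8080 by default), so you can query it with, for example:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "The meaning to life and the universe is"}],
    "max_tokens": 64
  }'
```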

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
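
Note that recent llama.cpp releases have dropped the Makefile build in favor of CMake, and the CUDA switch is now spelled `GGML_CUDA`. On those versions a roughly equivalent build is:

```bash
# LLAMA_CURL enables --hf-repo downloads; GGML_CUDA targets NVIDIA GPUs.
cmake -B build -DLLAMA_CURL=ON -DGGML_CUDA=ON
cmake --build build --config Release
```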

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo PsiPi/cwm-Q2_K-GGUF --hf-file cwm-q2_k.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo PsiPi/cwm-Q2_K-GGUF --hf-file cwm-q2_k.gguf -c 2048
```
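
If you built with CMake rather than make, the binaries land under `build/bin/` instead (e.g. `./build/bin/llama-server`). With the server running, the OpenAI-compatible endpoint shown above is available; a quick liveness check:

```bash
curl http://localhost:8080/health
```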