Instructions to use Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf",
    filename="DeepSeek-Coder-V2-Instruct-bf16-00001-of-00011.gguf",
)
response = llm.create_chat_completion(
    messages=[
        # Example prompt; replace with your own instruction
        {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
    ]
)
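The call returns an OpenAI-style chat-completion dictionary. A minimal sketch of pulling the generated text out of it (the field names follow llama-cpp-python's non-streaming response format):

# The reply text lives in the first choice's message content
print(response["choices"][0]["message"]["content"])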
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf with llama.cpp:
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16
# Run inference directly in the terminal:
llama-cli -hf Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16
# Run inference directly in the terminal:
llama-cli -hf Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16
# Run inference directly in the terminal:
./llama-cli -hf Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16
# Run inference directly in the terminal:
./build/bin/llama-cli -hf Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16
Use Docker
docker model run hf.co/Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16
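Once llama-server is running (from any of the install variants above), it exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it from Python with requests, assuming the server is on its default http://localhost:8080; the prompt and the model field are placeholders (llama-server answers with whatever model it has loaded):

import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "deepseek-coder-v2",  # placeholder; the server uses the model it loaded
        "messages": [{"role": "user", "content": "Write a quicksort in Python."}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])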
- LM Studio
- Jan
- Ollama
How to use Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf with Ollama:
ollama run hf.co/Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16
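After ollama run has pulled the model, it can also be called through Ollama's local REST API. A minimal sketch, assuming Ollama's default port 11434 and the same hf.co model tag as above:

import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16",
        "messages": [{"role": "user", "content": "Explain what a GGUF file is."}],
        "stream": False,  # return one JSON object instead of a stream
    },
)
print(resp.json()["message"]["content"])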
- Unsloth Studio
How to use Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf to start chatting
- Docker Model Runner
How to use Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf with Docker Model Runner:
docker model run hf.co/Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16
- Lemonade
How to use Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Ocean82/deepseek-coder-v2-inst-cpu-optimized-gguf:BF16
Run and chat with the model
lemonade run user.deepseek-coder-v2-inst-cpu-optimized-gguf-BF16
List all available models
lemonade list
Custom quantizations of DeepSeek-Coder-V2-Instruct optimized for CPU inference.
This iq4xm build combines the GGML IQ4_XS 4-bit type with q8_0, so it runs fast with minimal quality loss and takes advantage of the int8 optimizations on most newer server CPUs.
Although it required custom code to produce, it is compatible with standard llama.cpp from GitHub; alternatively, just search for nisten in LM Studio.
The following 4-bit version is the one I use myself; it gets 17 tokens/s on 64 ARM cores.
You no longer need to consolidate the split files: just point llama-cli at the first one and it will handle the rest.
Then, to run in command-line interactive mode (the prompt.txt file is optional; -c sets the context window and -cnv enables conversation mode), just do:
./llama-cli --temp 0.4 -m deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf -c 32000 -co -cnv -i -f prompt.txt
deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf
deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf
deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf
deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf
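The same split handling works when loading the model programmatically. A minimal sketch with llama-cpp-python, assuming the four shards above sit in the current directory and that your build, like recent llama.cpp, resolves the remaining shards from the first one:

from llama_cpp import Llama

# Point at the first shard; the other -0000x-of-00004 files are picked up automatically
llm = Llama(
    model_path="deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf",
    n_ctx=32000,  # matches the -c 32000 used with llama-cli above
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a binary search in C."}]
)
print(out["choices"][0]["message"]["content"])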
To download the models much faster, install aria2 first: on Linux, apt install aria2; on macOS, brew install aria2.
sudo apt install -y aria2
aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf
And to download the Q8_0 version, converted as losslessly as possible from the HF bf16 weights, grab these:
aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00001-of-00006.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00001-of-00006.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00002-of-00006.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00002-of-00006.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00003-of-00006.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00003-of-00006.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00004-of-00006.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00004-of-00006.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00005-of-00006.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00005-of-00006.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00006-of-00006.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00006-of-00006.gguf
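If you prefer Python tooling over aria2, the same shards can be fetched with huggingface_hub; a minimal sketch (the allow_patterns glob is an assumption that simply matches the file names listed above):

from huggingface_hub import snapshot_download

# Grab only the iq4xm shards; swap the pattern for "deepseek_coder_v2_cpu_q8_0-*" to get the Q8_0 files instead
snapshot_download(
    repo_id="nisten/deepseek-coder-v2-inst-cpu-optimized-gguf",
    allow_patterns=["deepseek_coder_v2_cpu_iq4xm.gguf-*"],
    local_dir=".",
)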
Use of the DeepSeek-Coder-V2 Base/Instruct models is subject to the Model License. The DeepSeek-Coder-V2 series (including Base and Instruct) supports commercial use. It's a permissive license that only restricts use for military purposes, harming minors, or patent trolling.
Enjoy and remember to accelerate!
-Nisten