Tags: GGUF · conversational
How to use from llama.cpp

Install via Homebrew (macOS/Linux):

```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf LoneStriker/DeepMagic-Coder-7b-GGUF:

# Run inference directly in the terminal:
llama-cli -hf LoneStriker/DeepMagic-Coder-7b-GGUF:
```
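Once `llama-server` is running, it exposes an OpenAI-compatible API (by default on `http://localhost:8080`). As a sketch, a chat request body can be built like this; the model name and prompt are illustrative placeholders:

```python
import json

# Hypothetical request payload for the local llama-server endpoint
# POST http://localhost:8080/v1/chat/completions
payload = {
    "model": "DeepMagic-Coder-7b",  # placeholder; llama-server accepts any name
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "temperature": 0.2,
}
body = json.dumps(payload)

# To actually send it (with the server running), something like:
# req = urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
```

Any OpenAI-style client library pointed at the local base URL should work the same way.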
Install via WinGet (Windows):

```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf LoneStriker/DeepMagic-Coder-7b-GGUF:

# Run inference directly in the terminal:
llama-cli -hf LoneStriker/DeepMagic-Coder-7b-GGUF:
```
Use a pre-built binary:

```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf LoneStriker/DeepMagic-Coder-7b-GGUF:

# Run inference directly in the terminal:
./llama-cli -hf LoneStriker/DeepMagic-Coder-7b-GGUF:
```
Build from source:

```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf LoneStriker/DeepMagic-Coder-7b-GGUF:

# Run inference directly in the terminal:
./build/bin/llama-cli -hf LoneStriker/DeepMagic-Coder-7b-GGUF:
```
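When building from source, GPU offload can optionally be enabled at the configure step. This is a sketch assuming an NVIDIA GPU with the CUDA toolkit installed; other backends use different flags:

```shell
# Configure with the CUDA backend enabled (assumes CUDA toolkit is installed)
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli
```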
Use Docker:

```shell
docker model run hf.co/LoneStriker/DeepMagic-Coder-7b-GGUF:
```
DeepMagic-Coder-7b

Alternate version:


This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).

This is the first of my models to use MergeKit's task_arithmetic merge method. The method is described below, and it's clearly very useful for merging AI models that were fine-tuned from a common base:

Task Arithmetic:

Computes "task vectors" for each model by subtracting a base model. 
Merges the task vectors linearly and adds back the base. 
Works great for models that were fine-tuned from a common ancestor. 
Also a super useful mental framework for several of the more involved 
merge methods.
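The arithmetic described above can be sketched with toy 1-D arrays standing in for full model tensors (the values are illustrative only, not real model weights):

```python
import numpy as np

# Toy stand-ins for model parameter tensors
base = np.array([0.10, -0.20, 0.30])   # common base model
ft_a = np.array([0.15, -0.25, 0.35])   # fine-tune A
ft_b = np.array([0.05, -0.10, 0.40])   # fine-tune B

# Task vectors: what each fine-tune learned relative to the base
tv_a = ft_a - base
tv_b = ft_b - base

# Merge weights (1 each in the config below); normalize so they sum to 1,
# mirroring `normalize: true`
weights = np.array([1.0, 1.0])
weights = weights / weights.sum()

# Linear combination of task vectors, added back onto the base
merged = base + weights[0] * tv_a + weights[1] * tv_b
```

With equal normalized weights this is simply the base plus the average of the two task vectors.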

The original models used in this merge can be found here:

The merge was created using MergeKit, and the parameters can be found below:

```yaml
models:
  - model: deepseek-ai_deepseek-coder-6.7b-instruct
    parameters:
      weight: 1
  - model: ise-uiuc_Magicoder-S-DS-6.7B
    parameters:
      weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
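To reproduce a merge like this, the configuration above can be saved to a file and passed to MergeKit's CLI. A minimal sketch, assuming MergeKit is installed from PyPI and the file is named `merge-config.yml` (both names are assumptions):

```shell
pip install mergekit

# Run the merge described by the config, writing the merged model
# to ./DeepMagic-Coder-7b and copying the tokenizer from the base model
mergekit-yaml merge-config.yml ./DeepMagic-Coder-7b --copy-tokenizer
```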
Downloads last month: 77
Format: GGUF
Model size: 7B params
Architecture: llama
Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit