---
license: apache-2.0
base_model: Jackrong/Qwen3.5-4B-Python-Coder
language:
- en
pipeline_tag: text-generation
tags:
- gguf
- qwen
- qwen3.5
- code
- python
---

# Qwen3.5-4B-Python-Coder-GGUF

## Available Quantizations

The following quantization formats are available in this repository:

* **Q3_K_M:** Smallest size, heavily quantized. Good for very low RAM environments, but significant loss in coding accuracy.
* **Q4_K_M:** Recommended baseline. Excellent balance between file size, memory usage, and coding performance.
* **Q5_K_M:** Higher accuracy than Q4_K_M at a slightly larger file size.
* **Q6_K:** Very close to the original unquantized model's performance. Great if you have the RAM for it.
* **Q8_0:** Almost zero quality loss compared to the original 16-bit model, but largest file size and highest memory requirement.

## How to Run

You can run these models locally using [llama.cpp](https://github.com/ggerganov/llama.cpp) or compatible interfaces like LM Studio, Ollama, or text-generation-webui.

**Example using `llama.cpp` in the terminal:**

```bash
./main -m Qwen3.5-4B-Python-Coder-Q4_K_M.gguf -n 512 --color -i -cml -p "<|im_start|>user\nWrite a Python script to scrape a website.<|im_end|>\n<|im_start|>assistant\n"
```

Note: on recent `llama.cpp` builds the `main` binary has been renamed `llama-cli`; substitute accordingly.
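**Example using Ollama:**

Since Ollama is mentioned above as a compatible runner, here is a minimal Modelfile sketch for the Q4_K_M quant. The ChatML template below matches the prompt format used in the `llama.cpp` example; the GGUF path assumes the file sits in the current directory, and the model name in the commands afterwards is arbitrary.

```
FROM ./Qwen3.5-4B-Python-Coder-Q4_K_M.gguf

# ChatML prompt template, matching the Qwen chat format
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

# Stop generation at the ChatML end-of-turn token
PARAMETER stop <|im_end|>
```

Then build and run the model:

```bash
ollama create qwen-python-coder -f Modelfile
ollama run qwen-python-coder "Write a Python script to scrape a website."
```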