---
license: apache-2.0
tags:
- gguf
- llama.cpp
- qwen
- coder
- quantized
- q6_k
- code
---

# Qwen3-Coder-0.6B – GGUF (q6_k)

This repository contains a **GGUF-quantized** version of **Qwen3-Coder-0.6B**, optimized for local inference using **llama.cpp-compatible runtimes**.

## Model details

- **Base model:** Qwen3-Coder-0.6B
- **Quantization:** q6_k
- **Format:** GGUF
- **Use case:** Code generation, code completion, lightweight coding tasks
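
The q6_k quantization stores weights at roughly 6.56 bits each (llama.cpp's Q6_K packs 256 weights into 210 bytes), which gives a quick way to estimate the file size. The sketch below is an approximation only: it counts weights alone and ignores GGUF metadata and any tensors kept at higher precision.

```python
def q6k_size_gib(n_params: float, bits_per_weight: float = 6.5625) -> float:
    """Rough weights-only size estimate for a Q6_K-quantized model.

    6.5625 bits/weight is the nominal Q6_K rate (210 bytes per 256
    weights); real files are slightly larger due to metadata.
    """
    return n_params * bits_per_weight / 8 / 2**30

# ~0.6B parameters at ~6.56 bits/weight -> roughly half a GiB on disk
estimate = q6k_size_gib(0.6e9)
```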

## Compatibility

Tested with:

- llama.cpp
- LM Studio
- text-generation-webui
- koboldcpp
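
All of these runtimes recognize the file by its GGUF header: the container starts with the 4-byte magic `GGUF` followed by a little-endian `uint32` format version. A minimal sketch of that check, for illustration only (loaders such as llama.cpp do this internally):

```python
import struct

def read_gguf_version(data: bytes) -> int:
    """Validate the GGUF magic and return the format version.

    Layout (per the GGUF spec): bytes 0-3 are the magic b"GGUF",
    bytes 4-7 are a little-endian uint32 version.
    """
    if data[:4] != b"GGUF":
        raise ValueError("not a GGUF file")
    (version,) = struct.unpack_from("<I", data, 4)
    return version
```

Pointing this at the first 8 bytes of `qwen3-coder-0.6b-q6_k.gguf` should report the version the file was written with.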

## Usage (llama.cpp)

```bash
./llama-cli -m qwen3-coder-0.6b-q6_k.gguf -p "Write a Python function to reverse a list"