---
base_model: mlx-community/Qwen2.5-Coder-32B-Instruct-bf16
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- mlx
- TensorBlock
- GGUF
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

[![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co)
[![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2)
[![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock)
[![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock)


## mlx-community/Qwen2.5-Coder-32B-Instruct-bf16 - GGUF

This repo contains GGUF format model files for [mlx-community/Qwen2.5-Coder-32B-Instruct-bf16](https://huggingface.co/mlx-community/Qwen2.5-Coder-32B-Instruct-bf16).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
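
If you don't already have a compatible build, the following is a minimal sketch of building `llama.cpp` from source at that exact commit, assuming `git`, `cmake`, and a C++ toolchain are installed:

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout a6744e43e80f4be6398fc7733a01642c846dce1d   # commit b4011 referenced above
cmake -B build
cmake --build build --config Release
# binaries such as llama-cli end up under build/bin/
```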

## Our projects
<table border="1" cellspacing="0" cellpadding="10">
  <tr>
    <th colspan="2" style="font-size: 25px;">Forge</th>
  </tr>
  <tr>
    <th colspan="2">
      <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
    </th>
  </tr>
  <tr>
    <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
  </tr>
  <tr>
    <th colspan="2">
      <a href="https://github.com/TensorBlock/forge" target="_blank" style="
        display: inline-block;
        padding: 8px 16px;
        background-color: #FF7F50;
        color: white;
        text-decoration: none;
        border-radius: 6px;
        font-weight: bold;
        font-family: sans-serif;
      ">πŸš€ Try it now! πŸš€</a>
    </th>
  </tr>

  <tr>
    <th style="font-size: 25px;">Awesome MCP Servers</th>
    <th style="font-size: 25px;">TensorBlock Studio</th>
  </tr>
  <tr>
    <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
    <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
  </tr>
  <tr>
    <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
    <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
  </tr>
  <tr>
    <th>
      <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
        display: inline-block;
        padding: 8px 16px;
        background-color: #FF7F50;
        color: white;
        text-decoration: none;
        border-radius: 6px;
        font-weight: bold;
        font-family: sans-serif;
      ">πŸ‘€ See what we built πŸ‘€</a>
    </th>
    <th>
      <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
        display: inline-block;
        padding: 8px 16px;
        background-color: #FF7F50;
        color: white;
        text-decoration: none;
        border-radius: 6px;
        font-weight: bold;
        font-family: sans-serif;
      ">πŸ‘€ See what we built πŸ‘€</a>
    </th>
  </tr>
</table>

## Prompt template

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
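
As a rough illustration, the filled-in template can be passed straight to `llama-cli`; this sketch assumes a local `llama.cpp` build is on your `PATH` and that the Q4_K_M file from the table below has already been downloaded (the `-e` flag expands the `\n` escapes in the prompt):

```shell
llama-cli -m Qwen2.5-Coder-32B-Instruct-bf16-Q4_K_M.gguf \
  -e -n 256 \
  -p "<|im_start|>system\nYou are a helpful coding assistant.<|im_end|>\n<|im_start|>user\nWrite a function that reverses a string.<|im_end|>\n<|im_start|>assistant\n"
```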

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Qwen2.5-Coder-32B-Instruct-bf16-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-bf16-Q2_K.gguf) | Q2_K | 12.313 GB | smallest, significant quality loss - not recommended for most purposes |
| [Qwen2.5-Coder-32B-Instruct-bf16-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-bf16-Q3_K_S.gguf) | Q3_K_S | 14.392 GB | very small, high quality loss |
| [Qwen2.5-Coder-32B-Instruct-bf16-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-bf16-Q3_K_M.gguf) | Q3_K_M | 15.935 GB | very small, high quality loss |
| [Qwen2.5-Coder-32B-Instruct-bf16-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-bf16-Q3_K_L.gguf) | Q3_K_L | 17.247 GB | small, substantial quality loss |
| [Qwen2.5-Coder-32B-Instruct-bf16-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-bf16-Q4_0.gguf) | Q4_0 | 18.640 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen2.5-Coder-32B-Instruct-bf16-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-bf16-Q4_K_S.gguf) | Q4_K_S | 18.784 GB | small, greater quality loss |
| [Qwen2.5-Coder-32B-Instruct-bf16-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-bf16-Q4_K_M.gguf) | Q4_K_M | 19.851 GB | medium, balanced quality - recommended |
| [Qwen2.5-Coder-32B-Instruct-bf16-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-bf16-Q5_0.gguf) | Q5_0 | 22.638 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen2.5-Coder-32B-Instruct-bf16-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-bf16-Q5_K_S.gguf) | Q5_K_S | 22.638 GB | large, low quality loss - recommended |
| [Qwen2.5-Coder-32B-Instruct-bf16-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-bf16-Q5_K_M.gguf) | Q5_K_M | 23.262 GB | large, very low quality loss - recommended |
| [Qwen2.5-Coder-32B-Instruct-bf16-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-bf16-Q6_K.gguf) | Q6_K | 26.886 GB | very large, extremely low quality loss |
| [Qwen2.5-Coder-32B-Instruct-bf16-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-bf16-Q8_0.gguf) | Q8_0 | 34.821 GB | very large, extremely low quality loss - not recommended |


## Downloading instructions

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF --include "Qwen2.5-Coder-32B-Instruct-bf16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
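
As a quick sanity check after either command, the size of each downloaded file should roughly match the table above, for example:

```shell
# ~19.9 GB expected for the Q4_K_M file
ls -lh MY_LOCAL_DIR/*.gguf
```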