---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Coder-Next-GGUF/blob/main/LICENSE
pipeline_tag: text-generation
---

# Qwen3-Coder-Next

## Highlights

Today, we're announcing **Qwen3-Coder-Next**, an open-weight language model designed specifically for coding agents and local development. It features the following key enhancements:

- **Highly Efficient with Strong Performance**: With only 3B activated parameters (80B total), it achieves performance comparable to models with 10–20x more active parameters, making it highly cost-effective for agent deployment.
- **Advanced Agentic Capabilities**: Through an elaborate training recipe, it excels at long-horizon reasoning, complex tool usage, and recovery from execution failures, ensuring robust performance in dynamic coding tasks.
- **Versatile Integration with Real-World IDEs**: Its 256K context length, combined with adaptability to various scaffold templates, enables seamless integration with different CLI/IDE platforms (e.g., Claude Code, Qwen Code, Qoder, Kilo, Trae, Cline), supporting diverse development environments.

![image/jpeg](https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen3-Coder-Next/benchmarks.png)

![image/jpeg](https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen3-Coder-Next/swebench_pro.png)

## Model Overview

**Qwen3-Coder-Next** has the following features:
- Type: Causal Language Model
- Training Stage: Pretraining & Post-training
- Number of Parameters: 80B in total and 3B activated
- Number of Parameters (Non-Embedding): 79B
- Hidden Dimension: 2048
- Number of Layers: 48
- Hybrid Layout: 12 \* (3 \* (Gated DeltaNet -> MoE) -> 1 \* (Gated Attention -> MoE))
- Gated Attention:
  - Number of Attention Heads: 16 for Q and 2 for KV
  - Head Dimension: 256
  - Rotary Position Embedding Dimension: 64
- Gated DeltaNet:
  - Number of Linear Attention Heads: 32 for V and 16 for QK
  - Head Dimension: 128
- Mixture of Experts:
  - Number of Experts: 512
  - Number of Activated Experts: 10
  - Number of Shared Experts: 1
  - Expert Intermediate Dimension: 512
- Context Length: 262,144 tokens natively
+
47
+ **NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**
48
+
49
+ For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwen.ai/blog?id=qwen3-coder-next), [GitHub](https://github.com/QwenLM/Qwen3-Coder), and [Documentation](https://qwen.readthedocs.io/en/latest/).
50
+
51
+
52
+ ## Quickstart
53
+
54
+ ### llama.cpp
55
+
56
+ Check out our [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/run_locally/llama.cpp.html) for more usage guide.
57
+
58
+ We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We follow the latest version of llama.cpp.
59
+ In the following demonstration, we assume that you are running commands under the repository `llama.cpp`.
60
+
61
+ ```shell
62
+ huggingface-cli download Qwen/Qwen3-Coder-Next-GGUF --include "Qwen3-Coder-Next-Q5_K_M/*"
63
+ ```
64
+
65
+ ```shell
66
+ ./llama-cli -m ./Qwen3-Coder-Next-Q5_K_M/Qwen3-Coder-Next-00001-of-00004.gguf --jinja -ngl 99 -fa on -sm row --temp 1.0 --top-k 40 --top-p 0.95 --min-p 0 -c 40960 -n 32768 --no-context-shift
67
+ ```

## Processing Long Texts

Qwen3-Coder-Next natively supports context lengths of up to 262,144 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance with the [YaRN](https://arxiv.org/abs/2309.00071) method on contexts extended beyond the native window (up to 4x with the settings below).

To enable YaRN in `llama.cpp`:

```shell
./llama-cli ... -c 1010000 --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 262144
```
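
As a quick sanity check on these flags, the scaled window is the native context multiplied by the RoPE scale factor, and the context requested with `-c` must fit inside it:

```python
# Sanity-check the YaRN settings used in the command above.
native_ctx = 262144      # --yarn-orig-ctx: the model's native context length
rope_scale = 4           # --rope-scale: YaRN extension factor
requested_ctx = 1010000  # -c: context length requested at launch

extended_ctx = native_ctx * rope_scale
print(extended_ctx)                   # 1048576
assert requested_ctx <= extended_ctx  # the requested context fits the scaled window
```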

## Agentic Coding

Qwen3-Coder-Next excels at tool calling.

You can define and use any tool, as in the following example:

```python
from openai import OpenAI

# Your tool implementation
def square_the_number(input_num: float) -> float:
    return input_num ** 2

# Define the tool schema
tools = [
    {
        "type": "function",
        "function": {
            "name": "square_the_number",
            "description": "Output the square of the number.",
            "parameters": {
                "type": "object",
                "required": ["input_num"],
                "properties": {
                    "input_num": {
                        "type": "number",
                        "description": "The number to be squared.",
                    }
                },
            },
        },
    },
]

# Point the client at a custom endpoint compatible with the OpenAI API
client = OpenAI(
    base_url="http://localhost:8000/v1",  # api_base
    api_key="EMPTY",
)

messages = [{"role": "user", "content": "square the number 1024"}]

completion = client.chat.completions.create(
    messages=messages,
    model="Qwen3-Coder-Next",
    max_tokens=65536,
    tools=tools,
)

print(completion.choices[0])
```
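
A complete agentic turn also executes the tool the model requested and feeds the result back. The sketch below shows the local half of that loop, following the standard OpenAI tool-calling convention (the dispatch table and helper names are illustrative, not part of any API):

```python
import json

def square_the_number(input_num: float) -> float:
    return input_num ** 2

# Local dispatch table: tool name -> implementation
TOOLS = {"square_the_number": square_the_number}

def run_tool_call(name: str, arguments: str) -> dict:
    """Execute one model-requested tool call and wrap the result as a
    `tool`-role message to append to the conversation."""
    args = json.loads(arguments)  # tool-call arguments arrive as a JSON string
    result = TOOLS[name](**args)
    return {"role": "tool", "content": json.dumps(result)}

# Example: the model asked for square_the_number({"input_num": 1024})
msg = run_tool_call("square_the_number", '{"input_num": 1024}')
print(msg)  # {'role': 'tool', 'content': '1048576'}
```

In a real loop you would read `name` and `arguments` from `completion.choices[0].message.tool_calls`, include the matching `tool_call_id` in the tool message, append it to `messages`, and call `client.chat.completions.create` again so the model can produce its final answer.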

## Best Practices

To achieve optimal performance, we recommend the following sampling parameters: `temperature=1.0`, `top_p=0.95`, `top_k=40`.
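
With an OpenAI-compatible client, `temperature` and `top_p` map directly onto request fields, but `top_k` is not part of the standard OpenAI API; some servers (vLLM, for example) accept it through `extra_body`. A sketch, assuming that passthrough mechanism (check your server's documentation):

```python
# Recommended sampling parameters packed as request kwargs for an
# OpenAI-compatible endpoint. `top_k` is not a standard OpenAI field;
# `extra_body` is one common server-specific passthrough (assumption).
sampling_kwargs = {
    "temperature": 1.0,
    "top_p": 0.95,
    "extra_body": {"top_k": 40},
}

# Usage: client.chat.completions.create(model=..., messages=..., **sampling_kwargs)
print(sampling_kwargs)
```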

## Citation

If you find our work helpful, please cite it:

```bibtex
@techreport{qwen_qwen3_coder_next_tech_report,
  title  = {Qwen3-Coder-Next Technical Report},
  author = {{Qwen Team}},
  url    = {https://github.com/QwenLM/Qwen3-Coder/blob/main/qwen3_coder_next_tech_report.pdf},
  note   = {Accessed: 2026-02-03}
}
```