Add paper link and sample usage for Fill-in-the-Middle (FIM)
#2
by nielsr HF Staff - opened
README.md
CHANGED

# Qwen3-Coder-Next-Base

## Introduction

**Qwen3-Coder-Next-Base** is an open-weight language model designed specifically for coding agents and local development. It is the base version of the 80B-parameter model that activates only 3B parameters during inference, as described in the [Qwen3-Coder-Next Technical Report](https://huggingface.co/papers/2603.00729).

## Highlights

**Qwen3-Coder-Next-Base** is built for coding agents and local development, and features the following key enhancements:

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwen.ai/blog?id=qwen3-coder-next), [GitHub](https://github.com/QwenLM/Qwen3-Coder), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Sample Usage

### Fill in the middle with Qwen3-Coder

The code insertion task, also known as "fill-in-the-middle" (FIM), asks the model to generate the code that bridges a gap in a given context: the model sees the code before and after the hole and must produce the missing middle.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Qwen/Qwen3-Coder-Next-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto").eval()

input_text = """<|fim_prefix|>def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
<|fim_suffix|>
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)<|fim_middle|>"""

model_inputs = tokenizer([input_text], return_tensors="pt").to(model.device)

# Use `max_new_tokens` to control the maximum output length.
# Stop on the FIM-specific special tokens as well as the regular EOS tokens.
eos_token_ids = [151659, 151661, 151662, 151663, 151664, 151643, 151645]
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=False, eos_token_id=eos_token_ids)[0]

# generated_ids includes the prompt tokens; decode only the newly generated part.
output_text = tokenizer.decode(generated_ids[len(model_inputs.input_ids[0]):], skip_special_tokens=True)

print(f"Prompt: {input_text}\n\nGenerated text: {output_text}")
```
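
For clarity, the sentinel layout used in `input_text` above can be produced by a small helper. `build_fim_prompt` is our own illustrative name, not part of the `transformers` API:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Wrap a code prefix and suffix in Qwen-style FIM sentinel tokens.

    The model is expected to generate the missing middle after <|fim_middle|>.
    """
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

# Example: ask the model to fill in the body of a function.
prompt = build_fim_prompt(
    "def add(a, b):\n    ",  # code before the hole
    "\n",                    # code after the hole
)
```

Keeping prefix and suffix as separate arguments makes it easy to slice a real file around the cursor position and rebuild the prompt for each completion request.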
## Best Practices

To achieve optimal performance, we recommend the following sampling parameters: `temperature=1.0`, `top_p=0.95`, `top_k=40`.
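
These settings can be bundled into a keyword dictionary and unpacked into `model.generate` (a minimal sketch; `sampling_params` is our own name, and `do_sample=True` is required for temperature/top-p/top-k to take effect in `transformers`):

```python
# Recommended sampling parameters, collected so they can be reused across calls,
# e.g. model.generate(**inputs, max_new_tokens=512, **sampling_params)
sampling_params = {
    "do_sample": True,   # enable sampling; greedy decoding ignores the knobs below
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,
}
```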

If you find our work helpful, feel free to cite us.

```bibtex
@techreport{qwen_qwen3_coder_next_tech_report,
  title  = {Qwen3-Coder-Next Technical Report},
  author = {{Qwen Team}},