---
license: mit
base_model:
- ByteDance-Seed/Seed-Coder-8B-Base
---

# Seed-Coder-8B-Reasoning

## Introduction

**Seed-Coder-8B-Reasoning** is an 8-billion-parameter model further optimized for **code reasoning**, **problem-solving**, and **algorithmic thinking**.

Built on the strong Seed-Coder base, it undergoes additional training in sandbox environments to significantly strengthen its ability to tackle complex coding problems and competition-style tasks. Key features:

- Trained on a **large-scale curated corpus**, filtered with an **LLM-based method** to ensure high-quality real-world code, text-code alignment, and synthetic data.
- **Sandbox fine-tuning** that specifically strengthens **multi-step reasoning**, **algorithm design**, and **competitive programming** capabilities.
- **Long-context handling** up to 32K tokens, enabling reasoning over extended problem descriptions and large input-output examples.

<p align="center">
  <img width="100%" src="imgs/seed-coder_intro_performance.jpg">
</p>

## Model Downloads

| Model Name | Context Length | Download | Notes |
|------------|----------------|----------|-------|
| Seed-Coder-8B-Base | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |
| Seed-Coder-8B-Instruct | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |
| 👉 **Seed-Coder-8B-Reasoning** | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL-trained to boost reasoning capabilities. |
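
If you want to fetch the model weights ahead of time (for offline use or a shared cache), here is a minimal sketch using the `huggingface_hub` client; it assumes `huggingface_hub` is installed (`pip install huggingface_hub`):

```python
from huggingface_hub import snapshot_download

# Download every file in the Reasoning checkpoint repo into the local
# Hugging Face cache and return the path to the snapshot directory.
local_dir = snapshot_download(repo_id="ByteDance-Seed/Seed-Coder-8B-Reasoning")
print(local_dir)
```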

## Requirements

You will need to install the latest versions of `transformers` and `accelerate`:

```bash
pip install -U transformers accelerate
```
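
To confirm that your environment picked up recent versions, you can run a quick check (this card does not pin exact minimum versions, so treat "recent release" as the requirement):

```python
import accelerate
import transformers

# Print the installed versions; recent releases of both libraries
# are expected to work with this model.
print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)
```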

## Quickstart

Here is a simple example demonstrating how to load the model and generate code with the Hugging Face `transformers` library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "ByteDance-Seed/Seed-Coder-8B-Reasoning"

# Load the tokenizer and model; bfloat16 halves memory usage and
# device_map="auto" spreads the weights across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Write a quick sort algorithm."},
]

# Format the conversation with the model's chat template and append the
# generation prompt so the model answers as the assistant.
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

# Reasoning traces can be long, so allow a generous generation budget.
outputs = model.generate(input_ids, max_new_tokens=16384)

# Decode only the newly generated tokens, excluding the prompt.
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```
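
Since reasoning traces can run to thousands of tokens, you may prefer to stream the output as it is generated instead of waiting for the full completion. Here is a minimal sketch using the built-in `TextStreamer` from `transformers`, reusing `tokenizer`, `model`, and `input_ids` from the example above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt avoids
# echoing the input conversation back to the console.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(input_ids, streamer=streamer, max_new_tokens=16384)
```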
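
A response from this model typically interleaves a reasoning trace with the final solution. If you only need the code, one simple post-processing approach (an illustrative sketch, not an official API; it assumes the final answer ends with a fenced code block) is to extract the last fenced block with a regular expression:

```python
import re

def extract_last_code_block(text: str):
    """Return the body of the last fenced code block in `text`, or None."""
    blocks = re.findall(r"```(?:\w+)?\n(.*?)```", text, flags=re.DOTALL)
    return blocks[-1] if blocks else None

code = extract_last_code_block(response)
if code is not None:
    print(code)
```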

## Evaluation

Seed-Coder-8B-Reasoning has been evaluated extensively on reasoning-intensive code benchmarks, showing:

- Significant improvements on **competitive programming** datasets and coding challenges.
- Enhanced ability to **break down complex problems**, **design correct algorithms**, and **produce efficient implementations**.
- Strong generalization to unseen problems across domains such as math, strings, arrays, graphs, and dynamic programming.

In the LiveCodeBench results below, the 4mon/3mon/2mon columns report results on problems released within the most recent 4, 3, and 2 months, respectively.

<table>
  <tr>
    <th rowspan="2">Model</th>
    <th colspan="3">LiveCodeBench-Hard</th>
    <th colspan="3">LiveCodeBench-Medium</th>
    <th colspan="3">LiveCodeBench-Easy</th>
    <th rowspan="2">Overall</th>
  </tr>
  <tr>
    <th>4mon</th><th>3mon</th><th>2mon</th>
    <th>4mon</th><th>3mon</th><th>2mon</th>
    <th>4mon</th><th>3mon</th><th>2mon</th>
  </tr>

  <!-- ~8B Models -->
  <tr><td colspan="11"><b>~8B Models</b></td></tr>
  <tr>
    <td>DeepSeek-R1-Distill-Qwen-7B</td>
    <td>11.3</td><td>10.7</td><td>9.6</td>
    <td>39.6</td><td>37.2</td><td>37.1</td>
    <td>76.2</td><td>77.1</td><td>67.1</td>
    <td>36.5</td>
  </tr>
  <tr>
    <td>DeepSeek-R1-Distill-Seed-Coder-8B</td>
    <td>13.6</td><td>13.9</td><td>13.4</td>
    <td>39.6</td><td>38.7</td><td>39.3</td>
    <td>79.8</td><td>80.2</td><td>73.2</td>
    <td>39.0</td>
  </tr>
  <tr>
    <td>OlympicCoder-7B</td>
    <td>12.7</td><td>11.8</td><td>12.5</td>
    <td>40.8</td><td>39.0</td><td>38.7</td>
    <td>78.0</td><td>77.1</td><td>67.8</td>
    <td>37.9</td>
  </tr>
  <tr>
    <td>Qwen3-8B-thinking</td>
    <td>27.5</td><td>23.5</td><td>19.7</td>
    <td>65.7</td><td>59.7</td><td>58.5</td>
    <td>98.0</td><td>98.1</td><td>97.3</td>
    <td>57.4</td>
  </tr>
  <tr>
    <td>Seed-Coder-8B-Reasoning</td>
    <td>27.6</td><td>28.0</td><td>31.0</td>
    <td>65.8</td><td>59.2</td><td>57.5</td>
    <td>87.8</td><td>88.0</td><td>80.1</td>
    <td>53.6</td>
  </tr>

  <!-- 13B+ Models -->
  <tr><td colspan="11"><b>13B+ Models</b></td></tr>
  <tr>
    <td>DeepSeek-R1-Distill-Qwen-14B</td>
    <td>21.3</td><td>20.5</td><td>16.1</td>
    <td>58.1</td><td>53.4</td><td>51.4</td>
    <td>93.3</td><td>94.2</td><td>93.7</td>
    <td>51.9</td>
  </tr>
  <tr>
    <td>Claude-3.7-Sonnet-thinking</td>
    <td>27.3</td><td>30.8</td><td>31.0</td>
    <td>54.5</td><td>55.1</td><td>51.4</td>
    <td>96.2</td><td>100.0</td><td>100.0</td>
    <td>53.3</td>
  </tr>
  <tr>
    <td>o3-mini-low</td>
    <td>30.3</td><td>32.3</td><td>28.6</td>
    <td>69.6</td><td>61.2</td><td>54.1</td>
    <td>98.7</td><td>100.0</td><td>100.0</td>
    <td>59.4</td>
  </tr>
</table>

For detailed benchmark performance, please refer to our [📑 technical report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).