---
license: mit
base_model:
- ByteDance-Seed/Seed-Coder-8B-Base
---

# Seed-Coder-8B-Reasoning

## Introduction
We are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder aims to promote the evolution of open code models through the following highlights:
- Model-centric: Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.
- Transparent: We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.
- Powerful: Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.
This repo contains the Seed-Coder-8B-Reasoning model, which has the following features:
- Type: Causal language models
- Training Stage: Pretraining & Post-training
- Data Source: Public datasets
- Context Length: 32,768
## Model Downloads
| Model Name | Context Length | Download | Notes |
|---|---|---|---|
| Seed-Coder-8B-Base | 32K | 🤗 Model | Pretrained on our model-centric code data. |
| Seed-Coder-8B-Instruct | 32K | 🤗 Model | Instruction-tuned for alignment with user intent. |
| 👉 Seed-Coder-8B-Reasoning | 32K | 🤗 Model | RL trained to boost reasoning capabilities. |
## Requirements

You will need to install the latest versions of `transformers` and `accelerate`:

```shell
pip install -U transformers accelerate
```
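If you want to confirm the installed versions at runtime before loading the model, a small sanity check can help. This is only a sketch; the minimum versions shown are illustrative placeholders, not official requirements of Seed-Coder.

```python
from importlib.metadata import version, PackageNotFoundError

def parse_version(v: str) -> tuple:
    # Compare the numeric release segments, e.g. "4.40.1" -> (4, 40, 1)
    return tuple(int(part) for part in v.split(".")[:3] if part.isdigit())

def check_package(name: str, minimum: str) -> bool:
    """Return True if `name` is installed at version >= `minimum`."""
    try:
        installed = version(name)
    except PackageNotFoundError:
        print(f"{name} is not installed")
        return False
    ok = parse_version(installed) >= parse_version(minimum)
    print(f"{name} {installed} ({'OK' if ok else 'too old, upgrade with pip install -U'})")
    return ok

# The minimums below are placeholders for illustration only
check_package("transformers", "4.40.0")
check_package("accelerate", "0.30.0")
```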
## Quickstart

Here is a simple example demonstrating how to load the model and perform code generation using the Hugging Face Transformers library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "ByteDance-Seed/Seed-Coder-8B-Reasoning"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "Write a quick sort algorithm."},
]

# Apply the chat template and move the input tensors to the model's device
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

# Reasoning traces can be long, so allow a generous generation budget
outputs = model.generate(input_ids, max_new_tokens=16384)

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```
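Reasoning models typically emit a long chain of thought before the final answer. As an illustration only, assuming the trace is wrapped in `<think>…</think>` delimiters (this delimiter format is an assumption here, not confirmed by the model card; check the tokenizer's chat template for the actual convention), a small post-processing helper could separate the reasoning from the final reply:

```python
def split_reasoning(response: str, open_tag: str = "<think>", close_tag: str = "</think>"):
    """Split a model response into (reasoning, answer).

    Assumes the chain of thought is wrapped in open_tag/close_tag (an
    assumption about the output format); if no such block is found, the
    whole response is treated as the answer.
    """
    start = response.find(open_tag)
    end = response.find(close_tag)
    if start == -1 or end == -1 or end < start:
        return "", response.strip()
    reasoning = response[start + len(open_tag):end].strip()
    answer = response[end + len(close_tag):].strip()
    return reasoning, answer

reasoning, answer = split_reasoning("<think>partition, then recurse</think>\ndef quick_sort(a): ...")
print(answer)  # -> def quick_sort(a): ...
```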
## Evaluation
Seed-Coder-8B-Reasoning has been evaluated extensively on reasoning-intensive code benchmarks, showing:
- Significant improvements on competitive programming datasets and coding challenges.
- Enhanced ability to break down complex problems, design correct algorithms, and produce efficient implementations.
- Strong generalization to unseen problems across multiple domains (math, strings, arrays, graphs, DP, etc.).
Results on LiveCodeBench, broken down by difficulty split (Hard/Medium/Easy) and evaluation window (4, 3, or 2 months):

| Model | Hard (4mon) | Hard (3mon) | Hard (2mon) | Medium (4mon) | Medium (3mon) | Medium (2mon) | Easy (4mon) | Easy (3mon) | Easy (2mon) | Overall |
|---|---|---|---|---|---|---|---|---|---|---|
| **~8B Models** | | | | | | | | | | |
| DeepSeek-R1-Distill-Qwen-7B | 11.3 | 10.7 | 9.6 | 39.6 | 37.2 | 37.1 | 76.2 | 77.1 | 67.1 | 36.5 |
| DeepSeek-R1-Distill-Seed-Coder-8B | 13.6 | 13.9 | 13.4 | 39.6 | 38.7 | 39.3 | 79.8 | 80.2 | 73.2 | 39.0 |
| OlympicCoder-7B | 12.7 | 11.8 | 12.5 | 40.8 | 39.0 | 38.7 | 78.0 | 77.1 | 67.8 | 37.9 |
| Qwen3-8B-thinking | 27.5 | 23.5 | 19.7 | 65.7 | 59.7 | 58.5 | 98.0 | 98.1 | 97.3 | 57.4 |
| Seed-Coder-8B-Reasoning | 27.6 | 28.0 | 31.0 | 65.8 | 59.2 | 57.5 | 87.8 | 88.0 | 80.1 | 53.6 |
| **13B+ Models** | | | | | | | | | | |
| DeepSeek-R1-Distill-Qwen-14B | 21.3 | 20.5 | 16.1 | 58.1 | 53.4 | 51.4 | 93.3 | 94.2 | 93.7 | 51.9 |
| Claude-3.7-Sonnet-thinking | 27.3 | 30.8 | 31.0 | 54.5 | 55.1 | 51.4 | 96.2 | 100.0 | 100.0 | 53.3 |
| o3-mini-low | 30.3 | 32.3 | 28.6 | 69.6 | 61.2 | 54.1 | 98.7 | 100.0 | 100.0 | 59.4 |
For detailed benchmark performance, please refer to our 📑 Technical Report.