---
license: mit
base_model:
- ByteDance-Seed/Seed-Coder-8B-Base
---

# Seed-Coder-8B-Reasoning

## Introduction

**Seed-Coder-8B-Reasoning** is an 8-billion-parameter model further optimized for **code reasoning**, **problem-solving**, and **algorithmic thinking** tasks. Built on the strong Seed-Coder base, it undergoes additional training in sandbox environments to significantly enhance its ability to tackle complex coding problems and competitions. It features:

- Training on a **massively curated corpus**, filtered with an **LLM-based method** to ensure high-quality real-world code, text-code alignment, and synthetic data.
- **Sandbox fine-tuning** that specifically strengthens **multi-step reasoning**, **algorithm design**, and **competitive programming** capabilities.
- **Long-context handling** up to 32K tokens, enabling it to reason over extended problem descriptions and large input-output examples.
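Since the advertised context window is 32K tokens, one quick sanity check is to read the configured maximum sequence length. The following is a minimal sketch, assuming the checkpoint ships a Llama-style config that exposes `max_position_embeddings` (not an official snippet from this model card):

```python
from transformers import AutoConfig

# Load only the config (no weights) to inspect the context window.
config = AutoConfig.from_pretrained("ByteDance-Seed/Seed-Coder-8B-Reasoning")

# Llama-style configs expose the maximum sequence length here;
# per the model card this should correspond to a 32K-token window.
print(config.max_position_embeddings)
```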

## Model Downloads

| Model Name | Length | Download | Notes |
|---|---|---|---|
| Seed-Coder-8B-Base | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |
| Seed-Coder-8B-Instruct | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |
| 👉 **Seed-Coder-8B-Reasoning** | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL trained to boost reasoning capabilities. |

## Requirements

You will need to install the latest versions of `transformers` and `accelerate`:

```bash
pip install -U transformers accelerate
```

## Quickstart

Here is a simple example demonstrating how to load the model and perform code generation using the Hugging Face `transformers` API:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "ByteDance-Seed/Seed-Coder-8B-Reasoning"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "Write a quick sort algorithm."},
]

# Apply the chat template and move the prompt to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

# Reasoning traces can be long, so allow a generous token budget.
outputs = model.generate(input_ids, max_new_tokens=16384)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```
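Because the model can emit a long chain of thought before its final answer, it is often convenient to stream tokens as they are generated rather than wait for the full completion. Below is a minimal sketch using the built-in `TextStreamer` from `transformers`, reusing `model`, `tokenizer`, and `input_ids` from the Quickstart above; the sampling settings are illustrative assumptions, not official recommendations:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

outputs = model.generate(
    input_ids,
    max_new_tokens=16384,
    do_sample=True,    # sampling values below are illustrative assumptions
    temperature=0.6,
    top_p=0.95,
    streamer=streamer,
)
```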
## Evaluation

Seed-Coder-8B-Reasoning has been evaluated extensively on reasoning-intensive code benchmarks, showing:

- Significant improvements on **competitive programming** datasets and coding challenges.
- Enhanced ability to **break down complex problems**, **design correct algorithms**, and **produce efficient implementations**.
- Strong generalization to unseen problems across multiple domains (math, strings, arrays, graphs, DP, etc.).

LiveCodeBench results by difficulty split (Hard / Medium / Easy) and time window (4mon / 3mon / 2mon):

| Model | Hard (4mon) | Hard (3mon) | Hard (2mon) | Medium (4mon) | Medium (3mon) | Medium (2mon) | Easy (4mon) | Easy (3mon) | Easy (2mon) | Overall |
|---|---|---|---|---|---|---|---|---|---|---|
| **~8B Models** | | | | | | | | | | |
| DeepSeek-R1-Distill-Qwen-7B | 11.3 | 10.7 | 9.6 | 39.6 | 37.2 | 37.1 | 76.2 | 77.1 | 67.1 | 36.5 |
| DeepSeek-R1-Distill-Seed-Coder-8B | 13.6 | 13.9 | 13.4 | 39.6 | 38.7 | 39.3 | 79.8 | 80.2 | 73.2 | 39.0 |
| OlympicCoder-7B | 12.7 | 11.8 | 12.5 | 40.8 | 39.0 | 38.7 | 78.0 | 77.1 | 67.8 | 37.9 |
| Qwen3-8B-thinking | 27.5 | 23.5 | 19.7 | 65.7 | 59.7 | 58.5 | 98.0 | 98.1 | 97.3 | 57.4 |
| **Seed-Coder-8B-Reasoning** | 27.6 | 28.0 | 31.0 | 65.8 | 59.2 | 57.5 | 87.8 | 88.0 | 80.1 | 53.6 |
| **13B+ Models** | | | | | | | | | | |
| DeepSeek-R1-Distill-Qwen-14B | 21.3 | 20.5 | 16.1 | 58.1 | 53.4 | 51.4 | 93.3 | 94.2 | 93.7 | 51.9 |
| Claude-3.7-Sonnet-thinking | 27.3 | 30.8 | 31.0 | 54.5 | 55.1 | 51.4 | 96.2 | 100.0 | 100.0 | 53.3 |
| o3-mini-low | 30.3 | 32.3 | 28.6 | 69.6 | 61.2 | 54.1 | 98.7 | 100.0 | 100.0 | 59.4 |
For detailed benchmark performance, please refer to our [📑 technical report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).