---
license: apache-2.0
base_model:
  - ByteDance-Seed/Seed-Coder-8B-Base
---

Seed-Coder-8B-Reasoning

Introduction

Seed-Coder-8B-Reasoning is an 8-billion-parameter model further optimized for code reasoning, problem-solving, and algorithmic thinking tasks.
Built upon the strong Seed-Coder base model, it undergoes additional training in sandbox environments that significantly enhances its ability to tackle complex coding problems and competitive-programming tasks. It features:

  • Trained on a large-scale, carefully curated corpus, filtered with an LLM-based method to ensure high-quality real-world code, text-code alignment data, and synthetic datasets.
  • Fine-tuned in sandbox environments to specifically strengthen multi-step reasoning, algorithm design, and competitive-programming capabilities.
  • Retains long-context handling up to 32K tokens, enabling it to reason over extended problem descriptions and large input/output examples (a quick way to verify this setting is sketched below).
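If you want to confirm the 32K context window on the released checkpoint, you can read it from the model config. This is a minimal sketch that assumes a Llama-style config field named max_position_embeddings, which this card does not explicitly confirm:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("ByteDance-Seed/Seed-Coder-8B-Reasoning")
# Llama-style configs store the context window here; 32K tokens corresponds to 32768.
print(getattr(config, "max_position_embeddings", "field not present"))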

Model Downloads

| Model Name | Type | Length | Download |
|---|---|---|---|
| Seed-Coder-8B-Base | base | 32K | 🤗 Hugging Face |
| Seed-Coder-8B-Instruct | instruct | 32K | 🤗 Hugging Face |
| 👉 Seed-Coder-8B-Reasoning | reasoning | 32K | 🤗 Hugging Face |

Requirements

You will need to install the latest versions of transformers and accelerate:

pip install -U transformers accelerate
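After installing, a quick sanity check confirms which versions are in your environment (the card does not pin minimum versions, so this is just a convenience check):

import accelerate
import transformers

# Print the installed versions to confirm the upgrade took effect.
print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)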

Quickstart

Here is a simple example demonstrating how to load the model and perform code generation using the Hugging Face pipeline API:

import transformers
import torch

model_id = "ByteDance-Seed/Seed-Coder-8B-Reasoning"

# Load the model in bfloat16 and let accelerate place it on available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Chat-style input: a single user turn containing the coding problem.
messages = [
    {"role": "user", "content": "Solve the following problem: Given an array of integers, find two numbers such that they add up to a specific target number."},
]

outputs = pipeline(
    messages,
    max_new_tokens=512,
)
# The pipeline returns the full conversation; the last message is the model's reply.
print(outputs[0]["generated_text"][-1]["content"])
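If you prefer explicit control over tokenization and decoding, the same generation can be written without the pipeline helper. The following is a minimal sketch rather than an official recipe from this card; the prompt and generation settings are illustrative assumptions:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ByteDance-Seed/Seed-Coder-8B-Reasoning"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative prompt; any coding question in chat format works the same way.
messages = [
    {"role": "user", "content": "Write a function that checks whether a string is a palindrome."},
]

# Render the conversation with the model's built-in chat template and append
# the generation prompt so the model responds as the assistant.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))

Note that reasoning-oriented models often emit a long chain of thought before the final answer, so a small max_new_tokens can truncate the solution; raise it as needed within the 32K context budget.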

Evaluation

Seed-Coder-8B-Reasoning has been evaluated extensively on reasoning-intensive code benchmarks, showing:

  • Significant improvements on competitive programming datasets and coding challenges.
  • Enhanced ability to break down complex problems, design correct algorithms, and produce efficient implementations.
  • Strong generalization to unseen problems across multiple domains (math, strings, arrays, graphs, dynamic programming, etc.).

For detailed results, please check our 📑 paper.

Citation

If you find our work helpful, please consider citing it:

@article{zhang2025seedcoder,
    title={Seed-Coder: Let the Code Model Curate Data for Itself},
    author={Xxx},
    year={2025},
    eprint={2504.xxxxx},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/xxxx.xxxxx}, 
}