---
license: mit
library_name: exllamav2
pipeline_tag: text-generation
base_model: ByteDance-Seed/Seed-Coder-8B-Base
---
# Seed-Coder-8B-Base-exl2
Original model: [Seed-Coder-8B-Base](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) by [ByteDance Seed](https://huggingface.co/ByteDance-Seed)
## Quants
- [4bpw h6 (main)](https://huggingface.co/cgus/Seed-Coder-8B-Base/tree/main)
- [4.5bpw h6](https://huggingface.co/cgus/Seed-Coder-8B-Base/tree/4.5bpw-h6)
- [5bpw h6](https://huggingface.co/cgus/Seed-Coder-8B-Base/tree/5bpw-h6)
- [6bpw h6](https://huggingface.co/cgus/Seed-Coder-8B-Base/tree/6bpw-h6)
- [8bpw h8](https://huggingface.co/cgus/Seed-Coder-8B-Base/tree/8bpw-h8)
## Quantization notes
Made with ExLlamaV2 0.3.1 using the default calibration dataset.
These quants require an Nvidia RTX GPU on Windows, or an RTX / AMD ROCm GPU on Linux, and can be used with TabbyAPI or Text-Generation-WebUI.
Since this is a base model, it is mostly useful for raw text completion via the v1/completions API endpoint, e.g. for code completion in apps like Continue.dev or other local Copilot-style tools.
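For example, here is a minimal sketch of a raw completion request against a local TabbyAPI instance serving this quant. The port, API key, and the assumption that the model is already loaded are hypothetical; adjust them for your setup.
```python
import requests

# Hypothetical local TabbyAPI setup: default port 5000 and an API key
# taken from your TabbyAPI config; adjust both for your installation.
response = requests.post(
    "http://127.0.0.1:5000/v1/completions",
    headers={"Authorization": "Bearer your-tabbyapi-key"},
    json={
        "prompt": "def fibonacci(n):",
        "max_tokens": 100,
        "temperature": 0.2,
    },
    timeout=60,
)
print(response.json()["choices"][0]["text"])
```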
# Original model card
# Seed-Coder-8B-Base
<div align="left" style="line-height: 1;">
<a href="https://bytedance-seed-coder.github.io/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://img.shields.io/badge/Seed--Coder-Homepage-a468fe?color=a468fe&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://arxiv.org/abs/2506.03524" target="_blank" style="margin: 2px;">
<img alt="Technical Report" src="https://img.shields.io/badge/arXiv-Technical%20Report-brightgreen?logo=arxiv&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/ByteDance-Seed" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-ByteDance%20Seed-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?color=f5de53&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
## Introduction
We are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder promotes the evolution of open code models through the following highlights.
- **Model-centric:** Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.
- **Transparent:** We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.
- **Powerful:** Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.
<p align="center">
<img width="100%" src="imgs/seed-coder_intro_performance.png">
</p>
This repo contains the **Seed-Coder-8B-Base** model, with the following features:
- Type: Causal language models
- Training Stage: Pretraining
- Data Source: GitHub data, code-related web data
- Training Tokens: 6 trillion
- Supports: Code completion, code infilling (Fill-in-the-Middle)
- Context Length: 32,768
## Model Downloads
| Model Name | Context Length | Download | Notes |
|------------|----------------|----------|-------|
| 👉 **Seed-Coder-8B-Base** | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |
| Seed-Coder-8B-Instruct | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |
| Seed-Coder-8B-Reasoning | 64K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL trained to boost reasoning capabilities. |
| Seed-Coder-8B-Reasoning-bf16 | 64K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning-bf16) | RL trained to boost reasoning capabilities. |
## Requirements
You will need to install the latest versions of `transformers` and `accelerate`:
```bash
pip install -U transformers accelerate
```
## Quickstart
Here is a simple example demonstrating how to load the model and perform code generation using the Hugging Face `pipeline` API:
```python
import transformers
import torch

model_id = "ByteDance-Seed/Seed-Coder-8B-Base"

# Build a bfloat16 text-generation pipeline, sharded across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

output = pipeline("def say_hello_world():", max_new_tokens=100)
print(output[0]["generated_text"])
```
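If you prefer finer control over tokenization and decoding, the same generation can be done with the standard `AutoModelForCausalLM` API; a sketch equivalent to the pipeline call above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ByteDance-Seed/Seed-Coder-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Tokenize the prompt, generate on the model's device, and decode
inputs = tokenizer("def say_hello_world():", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```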
### Fill-in-the-Middle (FIM) Example
Seed-Coder-8B-Base natively supports **Fill-in-the-Middle (FIM)** tasks, where the model is given a prefix and a suffix and asked to predict the missing middle content. This allows for code infilling scenarios such as completing a function body or inserting missing logic between two pieces of code.
A typical example:
```python
import transformers
import torch

model_id = "ByteDance-Seed/Seed-Coder-8B-Base"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Concatenate a prefix, the special FIM sentinel tokens, and a suffix;
# the model predicts the missing middle after the '<[fim-middle]>' token.
prefix = "def add_numbers(a, b):\n    "
suffix = "\n    return result"

# Combine prefix and suffix following the FIM format
fim_input = '<[fim-suffix]>' + suffix + '<[fim-prefix]>' + prefix + '<[fim-middle]>'

output = pipeline(fim_input, max_new_tokens=512)
print(output[0]["generated_text"])
```
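Note that the pipeline's `generated_text` field echoes the FIM prompt itself. A minimal sketch for reassembling the completed function, assuming the `output`, `fim_input`, `prefix`, and `suffix` variables from the example above (the model may also emit trailing special tokens that you need to trim):
```python
# Strip the echoed prompt to keep only the generated middle, then
# splice it back between the original prefix and suffix.
middle = output[0]["generated_text"][len(fim_input):]
completed = prefix + middle + suffix
print(completed)
```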
## Evaluation
Seed-Coder-8B-Base has been evaluated on code generation, code completion, and code reasoning benchmarks, achieving state-of-the-art performance among ~8B open-source models.
| | DeepSeek-Coder-6.7B-Base | OpenCoder-8B-Base | Qwen2.5-Coder-7B | Seed-Coder-8B-Base |
|------------|:------------------------:|:-----------------:|:----------------:|:------------------:|
| HumanEval | 47.6 | 66.5 | 72.0 | **77.4** |
| MBPP | 70.2 | 79.9 | 79.4 | **82.0** |
| MultiPL-E | 44.7 | 61.0 | 58.8 | **67.6** |
| CRUXEval-O | 41.0 | 43.9 | **56.0** | 54.8 |
For detailed benchmark performance, please refer to our [📑 Technical Report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).
## License
This project is licensed under the MIT License. See the [LICENSE file](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE) for details.
## Citation
If you find Seed-Coder helpful, please consider citing our work:
```bibtex
@misc{seed2025seedcoderletcodemodel,
title={{Seed-Coder}: Let the Code Model Curate Data for Itself},
author={{ByteDance Seed} and Yuyu Zhang and Jing Su and Yifan Sun and Chenguang Xi and Xia Xiao and Shen Zheng and Anxiang Zhang and Kaibo Liu and Daoguang Zan and Tao Sun and Jinhua Zhu and Shulin Xin and Dong Huang and Yetao Bai and Lixin Dong and Chao Li and Jianchong Chen and Hanzhi Zhou and Yifan Huang and Guanghan Ning and Xierui Song and Jiaze Chen and Siyao Liu and Kai Shen and Liang Xiang and Yonghui Wu},
year={2025},
eprint={2506.03524},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.03524},
}
```