---
license: apache-2.0
base_model:
- Qwen/Qwen3-8B-Base
datasets:
- IIGroup/X-Coder-SFT-376k
language:
- en
tags:
- code
- sft
- competitive-programming
---
# X-Coder-SFT-Qwen3-8B

**X-Coder-SFT-Qwen3-8B** is a code generation model fine-tuned on fully synthetic instruction data and designed for competitive programming tasks. It serves as the starting point for subsequent reinforcement learning with verifiable rewards (RLVR) training.
## Model Description

- **Base Model:** [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base)
- **Training Method:** Supervised Fine-Tuning (SFT)
- **Training Data:** [IIGroup/X-Coder-SFT-376k](https://huggingface.co/datasets/IIGroup/X-Coder-SFT-376k)
- **Parameters:** 8B
## Training

This model was trained with [ms-swift](https://github.com/modelscope/ms-swift). For training details and code, please refer to the X-Coder GitHub repository.
### Training Hyperparameters
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3-8B-Base |
| Training Type | Full Parameter |
| Epochs | 8 |
| Global Batch Size | 128 |
| Learning Rate | 5e-5 |
| Max Grad Norm | 1.0 |
| Max Length | 32768 |
| Torch Dtype | bfloat16 |
| DeepSpeed | Zero3 Offload (80GB VRAM) / Zero2 (142GB VRAM) |
| Packing | True (2x faster training, slightly worse performance) |
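For orientation, a launch command mirroring the table above might look roughly like the following. This is a sketch, not the authors' actual script: the flag names follow common ms-swift conventions but are assumptions here, so verify them against the X-Coder GitHub repository and the ms-swift documentation before use.

```shell
# Hypothetical ms-swift SFT launch reflecting the hyperparameter table;
# check the X-Coder repository for the exact, authoritative script.
swift sft \
    --model Qwen/Qwen3-8B-Base \
    --dataset IIGroup/X-Coder-SFT-376k \
    --train_type full \
    --num_train_epochs 8 \
    --learning_rate 5e-5 \
    --max_grad_norm 1.0 \
    --max_length 32768 \
    --torch_dtype bfloat16 \
    --deepspeed zero3_offload \
    --packing true
```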
## Performance

Evaluated on LiveCodeBench v5.
## Recommended Inference Parameters
| Parameter | Value |
|---|---|
| temperature | 0.6 |
| top_p | 0.95 |
| top_k | 20 (or -1 to disable) |
| max_new_tokens | 32768 |
## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "IIGroup/X-Coder-SFT-Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Write a Python function to solve the two sum problem."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling settings follow the recommended inference parameters above.
outputs = model.generate(
    **inputs,
    max_new_tokens=32768,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
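For reference, a correct answer to the example prompt above is the classic hash-map solution sketched below. This is an illustration of the expected kind of output, not the model's verbatim response.

```python
def two_sum(nums, target):
    """Return indices of the two numbers in nums that add up to target."""
    # Map each seen value to its index for O(1) complement lookups,
    # giving an O(n) single-pass solution.
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []


print(two_sum([2, 7, 11, 15], 9))  # -> [0, 1]
```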
## Related Models

- **RL Model:** [IIGroup/X-Coder-RL-Qwen3-8B](https://huggingface.co/IIGroup/X-Coder-RL-Qwen3-8B) - the RLVR-trained version, achieving 64.0 on LiveCodeBench
## Citation

```bibtex
@misc{wu2026xcoderadvancingcompetitiveprogramming,
      title={X-Coder: Advancing Competitive Programming with Fully Synthetic Tasks, Solutions, and Tests},
      author={Jie Wu and Haoling Li and Xin Zhang and Jiani Guo and Jane Luo and Steven Liu and Yangyu Huang and Ruihang Chu and Scarlett Li and Yujiu Yang},
      year={2026},
      eprint={2601.06953},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.06953},
}
```
## License

This project is licensed under the Apache License 2.0.
