# X-Coder-SFT-Qwen3-8B

X-Coder-SFT-Qwen3-8B is a code generation model fine-tuned on fully synthetic instruction data, designed for competitive programming tasks. It serves as the foundation for subsequent RLVR training.

## Model Description

### Training

This model was trained using ms-swift. For training details and code, please refer to the X-Coder GitHub repository.

#### Training Hyperparameters

| Parameter | Value |
|-----------|-------|
| Base Model | Qwen/Qwen3-8B-Base |
| Training Type | Full Parameter |
| Epochs | 8 |
| Global Batch Size | 128 |
| Learning Rate | 5e-5 |
| Max Grad Norm | 1.0 |
| Max Length | 32768 |
| Torch Dtype | bfloat16 |
| DeepSpeed | ZeRO-3 Offload (80 GB VRAM) / ZeRO-2 (142 GB VRAM) |
| Packing | True (2x faster training, slightly worse performance) |
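
A launch command of roughly this shape maps the hyperparameters above onto ms-swift's `swift sft` CLI. This is an illustrative sketch, not the authors' exact command: flag names can differ between ms-swift versions, and the dataset argument is a placeholder — consult the X-Coder GitHub repository for the actual training scripts.

```shell
# Illustrative sketch only; verify flag names against your ms-swift version.
# The dataset path is a placeholder for the synthetic instruction data.
swift sft \
    --model Qwen/Qwen3-8B-Base \
    --train_type full \
    --dataset <path-to-synthetic-instruction-data> \
    --num_train_epochs 8 \
    --learning_rate 5e-5 \
    --max_grad_norm 1.0 \
    --max_length 32768 \
    --torch_dtype bfloat16 \
    --packing true \
    --deepspeed zero3
```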

## Results

Performance on LiveCodeBench v5.

## Recommended Inference Parameters

| Parameter | Value |
|-----------|-------|
| temperature | 0.6 |
| top_p | 0.95 |
| top_k | 20 (or -1 to disable) |
| max_new_tokens | 32768 |

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "IIGroup/X-Coder-SFT-Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # load the weights in bfloat16 as stored, not fp32
    device_map="auto",
)

prompt = "Write a Python function to solve the two sum problem."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample with the recommended inference parameters above.
outputs = model.generate(
    **inputs,
    max_new_tokens=32768,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Related Models

## Citation

```bibtex
@misc{wu2026xcoderadvancingcompetitiveprogramming,
      title={X-Coder: Advancing Competitive Programming with Fully Synthetic Tasks, Solutions, and Tests},
      author={Jie Wu and Haoling Li and Xin Zhang and Jiani Guo and Jane Luo and Steven Liu and Yangyu Huang and Ruihang Chu and Scarlett Li and Yujiu Yang},
      year={2026},
      eprint={2601.06953},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.06953},
}
```

## License

This project is licensed under the Apache License 2.0.
