X-Coder: Advancing Competitive Programming with Fully Synthetic Tasks, Solutions, and Tests
Paper: [arXiv:2601.06953](https://arxiv.org/abs/2601.06953)
X-Coder-RL-Qwen2.5-7B is a code reasoning foundation model trained with RLVR (reinforcement learning with verifiable rewards) on fully synthetic RL data, achieving strong performance on competitive programming benchmarks.
This model was trained using the X-Coder RLVR framework. For training details and code, please refer to the X-Coder GitHub repository.
(Figure: performance on LiveCodeBench v5.)

Recommended sampling parameters:
| Parameter | Value |
|---|---|
| temperature | 0.6 |
| top_p | 0.95 |
| top_k | 20 (or -1 to disable) |
| max_new_tokens | 32768 |
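The parameters above can be bundled into a plain keyword dict and passed directly to `model.generate` (a minimal sketch; `gen_kwargs` is a hypothetical name, and the keys are standard `transformers` generation arguments):

```python
# Recommended decoding parameters from the table above,
# expressed as keyword arguments for transformers' model.generate().
gen_kwargs = {
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,  # the card suggests -1 to disable top-k (vLLM-style convention)
    "max_new_tokens": 32768,
    "do_sample": True,  # sampling must be on for temperature/top_p to take effect
}

# Usage (assuming `model` and `inputs` are already prepared):
# outputs = model.generate(**inputs, **gen_kwargs)
```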
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "IIGroup/X-Coder-RL-Qwen2.5-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "Write a Python function to solve the two sum problem."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate with the recommended sampling parameters.
outputs = model.generate(
    **inputs,
    max_new_tokens=32768,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
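For reference, the example prompt asks for a two-sum solver; a canonical hash-map solution (hypothetical names, not the model's actual output) looks like:

```python
def two_sum(nums, target):
    """Return indices [i, j] with i < j and nums[i] + nums[j] == target, else None."""
    seen = {}  # maps value -> index of its first occurrence
    for j, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], j]
        seen[x] = j
    return None
```

For example, `two_sum([2, 7, 11, 15], 9)` returns `[0, 1]`; the single pass runs in O(n) time and O(n) space.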
```bibtex
@misc{wu2026xcoderadvancingcompetitiveprogramming,
  title={X-Coder: Advancing Competitive Programming with Fully Synthetic Tasks, Solutions, and Tests},
  author={Jie Wu and Haoling Li and Xin Zhang and Jiani Guo and Jane Luo and Steven Liu and Yangyu Huang and Ruihang Chu and Scarlett Li and Yujiu Yang},
  year={2026},
  eprint={2601.06953},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2601.06953},
}
```
This project is licensed under the Apache License 2.0.
Base model: Qwen/Qwen2.5-7B