The official repository for the paper "CoDiQ: Test-Time Scaling for Controllable Difficult Question Generation"
Introduction
Large Reasoning Models (LRMs) benefit substantially from training on challenging, competition-level questions. However, existing automated synthesis methods often produce "fake hard" questions: problems that appear complex but are unsolvable or ill-defined.
CoDiQ (Controllable Difficult Question Generation) is a novel framework that enables fine-grained difficulty control via test-time scaling while ensuring solvability.
Key innovations include:
- Test-Time Scaling Tendency: We identify that extending the reasoning token budget boosts difficulty but can reduce solvability.
- CoDiQ-Generator: A specialized model (finetuned from Qwen3-8B) that improves the upper bound of valid, high-difficulty question generation.
- CoDiQ-Corpus: A dataset of 44K competition-grade math and coding question sequences, which is significantly more challenging than LiveCodeBench and AIME.
Training LRMs on CoDiQ-Corpus substantially enhances downstream reasoning performance. The CoDiQ-Generator and CoDiQ-Corpus are released.
Citation
If you find CoDiQ useful for your research, please consider citing our paper:
```bibtex
@article{codiq2026,
  title={CoDiQ: Test-Time Scaling for Controllable Difficult Question Generation},
  author={Zhongyuan Peng and Caijun Xu and Changyi Xiao and Shibo Hong and Eli Zhang and Stephen Huang and Yixin Cao},
  journal={arXiv preprint arXiv:2602.01660},
  year={2026}
}
```