---
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
library_name: transformers
license: mit
metrics:
- accuracy
pipeline_tag: text-generation
---

# Z1: Efficient Test-time Scaling with Code

Training Large Language Models to Reason with Shifted Thinking

[📜 Paper](https://arxiv.org/abs/2504.00810) | [🤗 HF Models] | [🐱 GitHub](https://github.com/efficientscaling/Z1)

## Model Details

To get started with the shifted thinking mode, please refer to https://github.com/efficientscaling/Z1.

## Evaluation


## Citation

```bibtex
@misc{yu2025efficientscaling,
      title={Z1: Efficient Test-time Scaling with Code},
      author={Zhaojian Yu and Yinghao Wu and Yilun Zhao and Arman Cohan and Xiao-Ping Zhang},
      year={2025},
      eprint={2504.00810},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.00810},
}
```