One-shot Entropy Minimization
This model is described in the paper One-shot Entropy Minimization.
We trained 13,440 large language models and found that entropy minimization requires only a single unlabeled example and just 10 optimization steps to achieve performance improvements comparable to, or even greater than, those obtained with thousands of examples and carefully designed rewards in rule-based reinforcement learning. This striking result may prompt a rethinking of post-training paradigms for large language models.
- Code: https://github.com/zitian-gao/one-shot-em
- Project Page: https://www.notion.so/One-shot-Entropy-Minimization-202606db813b80639773f850f39246a5
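As a rough illustration of the objective described above (not the repository's exact implementation), entropy minimization trains the model to be confident on its own sampled continuation of an unlabeled prompt, with no labels or rewards. Below is a minimal PyTorch sketch of a single EM step, assuming the loss is the mean token-level entropy of the temperature-scaled output distribution over the generated tokens; the details in `train.py` may differ.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch of one entropy-minimization step (assumption: the loss is the mean
# token-level entropy over the model's own sampled continuation, using the
# same temperature as sampling; train.py's exact recipe may differ).
model_path = "/path/to/Qwen2.5-Math-7B"  # placeholder path, as in the README
tok = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
temperature = 0.5

prompt = "Solve: if 3x + 5 = 20, what is x?"  # a single unlabeled prompt
inputs = tok(prompt, return_tensors="pt").to(model.device)

# Sample a continuation from the current policy (no labels or rewards involved).
with torch.no_grad():
    gen = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=temperature)

# Forward pass over prompt + continuation; keep logits that predict the generated tokens.
out = model(gen)
prompt_len = inputs["input_ids"].shape[1]
gen_logits = out.logits[:, prompt_len - 1 : -1, :] / temperature
log_probs = F.log_softmax(gen_logits, dim=-1)
entropy = -(log_probs.exp() * log_probs).sum(-1)  # per-token entropy

loss = entropy.mean()  # minimizing entropy sharpens the model's own predictions
loss.backward()
optimizer.step()
optimizer.zero_grad()
```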
Installation
pip install torch transformers==4.47.1 accelerate deepspeed psutil pandas numpy wandb
Reproducing One-shot EM Training (SOTA)
accelerate launch train.py \
--model_name Qwen2.5-Math-7B \
--model_path /path/to/Qwen2.5-Math-7B \
--train_data dataset/1shot_rlvr/pi1_r1280.parquet \
--effective_batch 64 \
--micro_batch_size 2 \
--temperature 0.5 \
--learning_rate 2e-5 \
--max_steps 50 \
--log_steps 1 \
--save_steps 1 \
--run_name one_shot \
--wandb_project one-shot-em
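If you adapt the batch settings to your hardware, note that `--effective_batch` and `--micro_batch_size` are presumably tied together through gradient accumulation. A hedged sketch of the usual arithmetic follows; the actual derivation inside `train.py` may differ.

```python
# Illustrative arithmetic only; train.py may compute this differently.
num_gpus = 8            # number of processes launched by accelerate (assumption)
effective_batch = 64    # --effective_batch
micro_batch_size = 2    # --micro_batch_size

# Typical relation: effective batch = micro batch * GPUs * accumulation steps.
grad_accum_steps = effective_batch // (micro_batch_size * num_gpus)
print(grad_accum_steps)  # 4 accumulation steps in this configuration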
Reproducing Multi-shot EM Training
accelerate launch train.py \
--model_name Qwen2.5-Math-7B \
--model_path /path/to/Qwen2.5-Math-7B \
--train_data dataset/numina/numina_00.parquet \
--effective_batch 64 \
--micro_batch_size 2 \
--temperature 0.5 \
--learning_rate 2e-5 \
--max_steps 50 \
--log_steps 1 \
--save_steps 1 \
--run_name multi_shot \
--wandb_project one-shot-em
Evaluation
cd Qwen2.5-Eval/evaluation
bash sh/eval_all_math.sh
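Before running the full benchmark suite, you may want to sanity-check a trained checkpoint directly. A minimal sketch with transformers, assuming a checkpoint directory such as `checkpoints/one_shot/step_10` (the layout actually produced by `train.py` may differ):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint path; adjust to wherever train.py saved your run.
ckpt = "checkpoints/one_shot/step_10"
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype="auto", device_map="auto")

prompt = "Solve: if 3x + 5 = 20, what is x?"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.5)
print(tok.decode(out[0], skip_special_tokens=True))
```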
Acknowledgements
Our dataset references and builds upon the following open-source contributions:
- NuminaMath-CoT
- DeepScaler
- One-shot RLVR – for data selection strategies
- Qwen2.5-Eval – for evaluation benchmarks
We sincerely thank the authors and maintainers of these projects for their excellent contributions to the research community!
Citation
@misc{gao2025oneshotentropyminimization,
title={One-shot Entropy Minimization},
author={Zitian Gao and Lynx Chen and Joey Zhou and Bryan Dai},
year={2025},
eprint={2505.20282},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.20282},
}