Model Card for DAPO w/ Prompt Augmentation (Outdated)

###### THIS MODEL IS OUTDATED #######


For checkpoints with better performance, please refer to DAPO w/ Prompt Augmentation (step 2720) and DAPO w/ Prompt Augmentation (step 2480).

This is the step 2820 checkpoint from training Qwen2.5-Math-1.5B on the MATH Level-3-to-5 dataset using DAPO (without dynamic sampling) with prompt augmentation. The training procedure is described in the paper Prompt Augmentation Scales up GRPO Training on Mathematical Reasoning.

Uses

This model is intended for mathematical reasoning tasks. It leverages prompt augmentation to generate reasoning traces under diverse templates, increasing rollout diversity and stability during RL training.
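The augmentation idea can be sketched as follows: each rollout wraps the same question in a randomly chosen prompt template, so a GRPO/DAPO group samples under varied phrasings. This is a minimal illustration only; the template strings and the sampling scheme here are hypothetical and are not the exact ones used in the paper.

```python
import random

# Hypothetical templates for illustration -- the paper's actual template
# set may differ.
TEMPLATES = [
    "Solve the following problem step by step.\n\nProblem: {question}",
    "Question: {question}\nLet's think step by step.",
    "{question}\n\nReason carefully and give the final answer in \\boxed{{}}.",
]


def augment_prompt(question: str, rng: random.Random) -> str:
    """Wrap a math question in a randomly chosen template before rollout."""
    return rng.choice(TEMPLATES).format(question=question)


def build_rollout_batch(question: str, group_size: int, seed: int = 0) -> list[str]:
    """Produce one augmented prompt per rollout in a GRPO-style group.

    Sampling templates independently per rollout diversifies the group's
    reasoning traces while keeping the underlying question fixed.
    """
    rng = random.Random(seed)
    return [augment_prompt(question, rng) for _ in range(group_size)]
```

At training time each prompt in the batch would be sent to the policy for generation, and rewards would be computed against the shared ground-truth answer as usual.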

Citation

@misc{lu2026promptaugmentationscalesgrpo,
      title={Prompt Augmentation Scales up GRPO Training on Mathematical Reasoning},
      author={Wenquan Lu and Hai Huang and Randall Balestriero},
      year={2026},
      eprint={2602.03190},
      archivePrefix={arXiv},
}

Model tree for daviddavidlu/DAPO-with-prompt-augmentation-step2820

Base model: Qwen/Qwen2.5-1.5B