---
license: mit
---

# Difficulty Estimation on Open Reasoner Zero

We annotate the entire Open Reasoner Zero dataset with a difficulty score based on the performance of the Qwen 2.5-MATH-7B model, providing an adaptive signal for curriculum construction. Open Reasoner Zero is a curated dataset of 57,000 reasoning-intensive problems used to train and evaluate reinforcement-learning-based methods for large language models.

## Difficulty Scoring Method

Difficulty scores are estimated using the Qwen 2.5-MATH-7B model with the following generation settings:

- `temperature = 0.6`
- `top_p = 0.9`
- `max_tokens = 4096`
- Each problem is attempted 128 times
- Inference is performed via vLLM
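As a minimal sketch of the sampling step, the snippet below counts successes over repeated attempts. The `solve_once` callable is a hypothetical stand-in: in the real pipeline it would wrap a vLLM generation call to Qwen 2.5-MATH-7B with the settings listed above, followed by an answer check.

```python
import random

def estimate_success_count(solve_once, n_attempts: int = 128) -> int:
    """Count how many of n_attempts independent samples solve the problem.

    solve_once: hypothetical callable returning True when one sampled
    generation produces a correct answer (e.g. one vLLM completion
    at temperature=0.6, top_p=0.9, max_tokens=4096, answer-checked).
    """
    return sum(1 for _ in range(n_attempts) if solve_once())

# Toy stand-in for illustration: a "model" that solves the problem ~25% of the time.
random.seed(0)
successes = estimate_success_count(lambda: random.random() < 0.25)
```

This sketch only illustrates the counting logic; the actual annotation run samples 128 completions per problem with vLLM.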

The difficulty score for each problem is computed as:

 d_i = 100 × (1 - (# successes / 128))

This choice yields a balanced estimate: a much stronger model would trivially solve nearly every problem, compressing the difficulty signal, while a much weaker model would fail almost uniformly, making the scores equally uninformative. Qwen 2.5-MATH-7B was chosen for its mid-range capability, which spreads difficulty scores informatively across the dataset.
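The formula above can be sketched directly; a problem solved in all 128 attempts scores 0, one never solved scores 100:

```python
def difficulty_score(successes: int, attempts: int = 128) -> float:
    """d_i = 100 * (1 - successes / attempts).

    0.0  -> solved on every attempt (easiest)
    100.0 -> never solved (hardest)
    """
    return 100.0 * (1.0 - successes / attempts)

# Examples:
difficulty_score(128)  # -> 0.0
difficulty_score(64)   # -> 50.0
difficulty_score(0)    # -> 100.0
```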

## Contact

Feel free to contact Taiwei Shi (taiweish@usc.edu) if you have any questions.