---
base_model: Qwen/Qwen3-8B
datasets:
- Jasaxion/MathSmith-HC-Problems
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- verl
- math
- synthetic-data
---
# MathSmith-HC-Problem-Synthesizer-Qwen3-8B

**MathSmith: Towards Extremely Hard Mathematical Reasoning by Forging Synthetic Problems with a Reinforced Policy**
## Overview
MathSmith is a framework for synthesizing challenging mathematical problems to enhance LLM reasoning. This model is a reinforced policy-based synthesizer optimized to generate novel, Olympiad-level mathematical problems from scratch.
The model generates `<rationale>`–`<problem>` pairs, where:

- `<rationale>`: structured reasoning describing concept integration and difficulty-design strategies.
- `<problem>`: a single Olympiad-level mathematical question that admits a verifiable numeric or symbolic answer.
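Output in this tagged format can be split into its two segments with a small parser. The sketch below is illustrative tooling written for this card, not part of the official MathSmith codebase:

```python
import re

def parse_synthesis(output: str):
    """Extract the <rationale> and <problem> segments from raw model output.

    Returns a (rationale, problem) tuple, or None if either tag pair is
    missing (e.g. a malformed generation that should be discarded).
    """
    rationale = re.search(r"<rationale>(.*?)</rationale>", output, re.DOTALL)
    problem = re.search(r"<problem>(.*?)</problem>", output, re.DOTALL)
    if rationale is None or problem is None:
        return None
    return rationale.group(1).strip(), problem.group(1).strip()
```

Generations that fail to parse can simply be resampled, since structural validity is one of the reward signals the policy was trained on.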
MathSmith-HC (High Consistency) combines complexity and consistency as difficulty rewards during reinforcement learning, producing more stable problems than the version optimized solely for complexity.
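A minimal generation sketch with `transformers` is shown below. The repository id and the prompt wording are assumptions made for illustration; consult the MathSmith paper or repository for the exact prompt template used during training:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the dataset namespace; verify before use.
MODEL_ID = "Jasaxion/MathSmith-HC-Problem-Synthesizer-Qwen3-8B"

def build_prompt(concepts):
    """Compose a chat-style request asking the synthesizer to forge a problem
    from the given concepts. The instruction wording here is illustrative."""
    concept_list = ", ".join(concepts)
    return [{
        "role": "user",
        "content": (f"Using the concepts [{concept_list}], write a "
                    "<rationale>...</rationale> followed by an Olympiad-level "
                    "<problem>...</problem> with a verifiable answer."),
    }]

def synthesize(concepts, max_new_tokens=2048):
    """Load the synthesizer and generate one rationale-problem pair."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        build_prompt(concepts), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

# Example (downloads the 8B model and requires a GPU):
# print(synthesize(["Pell equations", "pigeonhole principle"]))
```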
## MathSmith Pipeline
The MathSmith framework consists of four main stages:
- Concept Collection: Randomly sample concept–explanation pairs from PlanetMath to ensure data independence and avoid benchmark contamination.
- Supervised Fine-tuning (SFT): Train the model on collected concept–explanation pairs to establish foundational understanding of problem generation.
- Reinforcement Learning (RL): Optimize the model using GRPO with rewards based on structural validity, reasoning complexity (trace length), and answer consistency.
- Weakness-Focused Self-Improvement: Iteratively identify and address model weaknesses by generating targeted problem variants for specific mathematical concepts.
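The RL stage's composite reward can be sketched as follows. The weights, the length normalization, and the zero-reward handling of invalid samples are assumptions made for illustration; they are not the paper's exact formulation:

```python
from collections import Counter

def difficulty_reward(problem_text, solver_traces, answers,
                      w_complexity=0.5, w_consistency=0.5,
                      target_trace_len=4096):
    """Illustrative combination of the three reward signals named above:
    structural validity, reasoning complexity (trace length), and answer
    consistency across solver samples. Weights/normalization are assumed.

    solver_traces: reasoning traces a solver model produced on the problem.
    answers: final answers extracted from those traces.
    """
    # Structural validity: an empty problem or a solver that produced no
    # answers yields zero reward.
    if not problem_text.strip() or not solver_traces or not answers:
        return 0.0
    # Complexity proxy: longer solver traces suggest a harder problem,
    # capped at a target length so the term stays in [0, 1].
    avg_len = sum(len(t) for t in solver_traces) / len(solver_traces)
    complexity = min(avg_len / target_trace_len, 1.0)
    # Consistency: fraction of sampled answers agreeing with the majority.
    majority_count = Counter(answers).most_common(1)[0][1]
    consistency = majority_count / len(answers)
    return w_complexity * complexity + w_consistency * consistency
```

The HC variant's emphasis on consistency penalizes problems whose sampled answers disagree, which is what makes its outputs more stable than the complexity-only variant's.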
## Dependencies
- Transformers 4.52.4
- PyTorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
## Citation
If you find this work useful, please cite:
```bibtex
@article{zhan2025mathsmith,
  title={MathSmith: Towards Extremely Hard Mathematical Reasoning by Forging Synthetic Problems with a Reinforced Policy},
  author={Zhan, Shaoxiong and Lai, Yanlin and Lu, Ziyu and Lin, Dahua and Yang, Ziqing and Tan, Fei},
  journal={arXiv preprint arXiv:2508.05592},
  year={2025}
}
```