---
base_model: QizhiPei/Qwen2.5-Math-7B-Instruct-RoPE-300k
library_name: transformers
license: apache-2.0
tags:
- llama-factory
- full
- generated_from_trainer
pipeline_tag: text-generation
model-index:
- name: ScaleDiff-7B
  results: []
---

Paper: [ScaleDiff: Scaling Difficult Problems for Advanced Mathematical Reasoning](https://arxiv.org/abs/2509.21070)

Code: https://github.com/QizhiPei/ScaleDiff

# ScaleDiff-7B

This model is a fine-tuned version of [QizhiPei/Qwen2.5-Math-7B-Instruct-RoPE-300k](https://huggingface.co/QizhiPei/Qwen2.5-Math-7B-Instruct-RoPE-300k) on the ScaleDiff-Math dataset.

## Model description

ScaleDiff-7B is a Large Reasoning Model (LRM) developed as part of the ScaleDiff pipeline, which is designed to scale up the creation of challenging mathematical problems. Fine-tuned on the ScaleDiff-Math dataset, the model targets advanced mathematical reasoning and addresses the scarcity of high-quality, difficult training data. The pipeline uses an adaptive thinking model to identify difficult problems and a specialized generator (DiffGen-8B) to synthesize new problems at scale.

## Intended uses & limitations

ScaleDiff-7B is intended for advanced mathematical reasoning tasks, where it is designed to improve complex problem-solving over its base model. It is particularly useful for researchers and practitioners benchmarking and developing LRMs on difficult mathematical problems. A minimal loading and inference sketch is given in the Usage section at the end of this card.

**Limitations**: As a language model, its performance depends on the quality and scope of its training data. Although it is trained on difficult problems, it may still struggle in highly novel or out-of-distribution mathematical contexts, and further research is needed to understand how well it generalizes beyond the benchmarks used in its evaluation.

## Training and evaluation data

ScaleDiff-7B was fine-tuned on the custom-built [ScaleDiff-Math dataset](https://huggingface.co/datasets/QizhiPei/ScaleDiff-Math). This dataset is generated through a three-step pipeline:

1. **Problem Selection**: Difficult problems are identified in the [AM-Distilled-Dataset](https://huggingface.co/datasets/a-m-team/AM-Qwen3-Distilled) using AdaptThink, an adaptive thinking model.
2. **Problem Generation**: A dedicated problem generator, DiffGen-8B, is trained on the selected difficult problems and used to produce new, challenging problems at scale.
3. **Solution Distillation and Filtration**: Long chain-of-thought (CoT) solutions for the newly generated problems are distilled from Qwen3-8B as a teacher model and then filtered for quality and relevance.

The final ScaleDiff-Math dataset combines these new problem-solution pairs with the original dataset to provide a more effective training signal.

Evaluation was conducted on a suite of difficult mathematical benchmarks including AIME'24, AIME'25, HMMT-Feb'25, BRUMO'25, and MATH500.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0

An illustrative mapping of these settings onto `transformers` `TrainingArguments` is sketched at the end of this card.

### Training results

### Framework versions

- Transformers 4.46.1
- PyTorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
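
## Usage

The snippet below is a minimal inference sketch using `transformers`. The repository id (`QizhiPei/ScaleDiff-7B`), dtype, sampling settings, and example problem are illustrative assumptions rather than values confirmed by this card; adjust them to your environment.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the model is published under this repository id.
model_id = "QizhiPei/ScaleDiff-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference on a recent GPU
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Find the sum of all positive integers n such that n^2 + 20n is a perfect square."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Long chain-of-thought models need a generous token budget on hard problems.
output_ids = model.generate(
    input_ids,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.6,  # assumption: typical sampling settings for reasoning models
    top_p=0.95,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```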
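
Grading math-benchmark outputs requires pulling a final answer out of a long chain-of-thought completion. This card does not specify the evaluation harness, so the helper below is only an illustrative sketch; it assumes the model emits final answers in `\boxed{}`, the convention common to Qwen2.5-Math-style models.

```python
def extract_boxed(text: str) -> str | None:
    """Return the contents of the last \\boxed{...} in a completion, or None.

    Scans character by character so that nested braces inside the box
    (e.g. \\boxed{\\frac{1}{2}}) are handled correctly.
    """
    start = text.rfind("\\boxed{")
    if start == -1:
        return None
    i = start + len("\\boxed{")
    depth = 1
    chars = []
    while i < len(text):
        c = text[i]
        if c == "{":
            depth += 1
        elif c == "}":
            depth -= 1
            if depth == 0:
                return "".join(chars)
        chars.append(c)
        i += 1
    return None  # unbalanced braces: no complete box found


def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Exact-match accuracy after boxed-answer extraction (illustrative metric)."""
    hits = sum(extract_boxed(p) == r for p, r in zip(predictions, references))
    return hits / max(len(references), 1)
```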
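
## Illustrative training configuration

The model was trained with LLaMA-Factory (full fine-tuning), and the hyperparameters above are as logged by that framework. Purely as an illustration, they map roughly onto `transformers` `TrainingArguments` as follows; `output_dir` and `bf16` are assumptions not stated in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scalediff-7b-sft",   # assumption: not stated in the card
    per_device_train_batch_size=1,   # train_batch_size above is per device
    per_device_eval_batch_size=8,
    learning_rate=5e-5,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    bf16=True,                       # assumption: common for 7B full fine-tuning
)
# With 32 GPUs and a per-device batch size of 1, the total train batch size
# of 32 is reached without gradient accumulation.
```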