Improve model card: Update metadata and enrich content for DiffGen-8B
This PR enhances the model card for DiffGen-8B, a key component of the ScaleDiff pipeline, by improving its discoverability and providing more comprehensive information.
Key updates include:
* Changing the `license` from `other` to `apache-2.0`, as inferred from the GitHub repository's license badge and in line with common practice for research artifacts.
* Adding the `pipeline_tag: text-generation` to accurately categorize the model's function as a problem generator.
* Adding `math-reasoning` to the `tags` for better discoverability within the mathematical domain.
* Expanding the "Model description", "Intended uses & limitations", and "Training and evaluation data" sections with detailed information extracted from the associated paper abstract and GitHub README.
* Removing the auto-generated comment at the top of the model card.
A `Sample Usage` section is not included: the GitHub README does not contain a ready-to-use `transformers` inference snippet for this specific model, and the guideline is not to create code snippets without evidence.
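As a sanity check on the metadata changes above, the sketch below parses the updated YAML front matter and asserts the new values. The card text is reproduced from this PR; the parser is a minimal hypothetical helper (not part of any Hugging Face library) that only handles the flat keys and one-level tag lists used here:

```python
# Sanity-check the updated model-card front matter (sketch only).
# CARD reproduces the post-PR metadata; parse_front_matter is a
# minimal hand-rolled helper, not a huggingface_hub API.

CARD = """\
---
base_model: Qwen/Qwen3-8B-Base
library_name: transformers
license: apache-2.0
tags:
- llama-factory
- full
- generated_from_trainer
- math-reasoning
pipeline_tag: text-generation
model-index:
- name: DiffGen-8B
  results: []
---

# DiffGen-8B
"""

def parse_front_matter(card_text: str) -> dict:
    """Parse the YAML block between the leading '---' fences.

    Minimal by design: handles flat `key: value` pairs and one-level
    `- item` lists; nested mappings (e.g. under model-index) are skipped.
    """
    _, block, _ = card_text.split("---", 2)
    meta, current_list = {}, None
    for line in block.splitlines():
        if not line.strip():
            continue
        if line.startswith("- ") and current_list is not None:
            current_list.append(line[2:].strip())
        elif line.startswith(" "):
            continue  # indented nested content (e.g. `results: []`) is out of scope
        elif line.endswith(":"):
            current_list = meta.setdefault(line[:-1], [])
        else:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
            current_list = None
    return meta

meta = parse_front_matter(CARD)
assert meta["license"] == "apache-2.0"
assert meta["pipeline_tag"] == "text-generation"
assert "math-reasoning" in meta["tags"]
```

The same checks can be run against the live card via `huggingface_hub` once the PR is merged.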
```diff
@@ -1,38 +1,39 @@
 ---
-library_name: transformers
-license: other
 base_model: Qwen/Qwen3-8B-Base
+library_name: transformers
+license: apache-2.0
 tags:
 - llama-factory
 - full
 - generated_from_trainer
+- math-reasoning
+pipeline_tag: text-generation
 model-index:
 - name: DiffGen-8B
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 Paper: [ScaleDiff: Scaling Difficult Problems for Advanced Mathematical Reasoning](https://arxiv.org/abs/2509.21070)
 
 Code: https://github.com/QizhiPei/ScaleDiff
 
 # DiffGen-8B
 
-This model is a fine-tuned version of [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base)
+This model is a fine-tuned version of [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base).
 
 ## Model description
 
-
+DiffGen-8B is a specialized difficult problem generator developed as part of the ScaleDiff pipeline, an approach designed to scale the creation of challenging mathematical problems for advanced mathematical reasoning. The model is trained on a filtered dataset of difficult problems, enabling it to efficiently produce a vast number of new, complex mathematical problems. This process eliminates the need for complex, per-instance prompting and its associated high API costs, addressing the scarcity of high-quality, difficult training data for Large Reasoning Models (LRMs).
 
 ## Intended uses & limitations
 
-
+**Intended Uses**: DiffGen-8B is primarily intended for generating large-scale datasets of challenging mathematical problems. These generated problems are then used to augment training data for Large Reasoning Models (LRMs), thereby enhancing their mathematical reasoning capabilities. It serves as a crucial component in pipelines focused on improving LRM performance on difficult benchmarks by providing a continuous supply of intricate reasoning challenges.
+
+**Limitations**: While DiffGen-8B excels at generating difficult problems, its primary scope is mathematical problem generation. The quality and relevance of the generated problems are further ensured through subsequent solution distillation and filtering steps within the broader ScaleDiff pipeline. Its performance may not be optimized for other general text generation tasks.
 
 ## Training and evaluation data
 
-
+DiffGen-8B is a fine-tuned version of [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base). It was trained on a subset of difficult problems selected from the [AM-Qwen3-Distilled](https://huggingface.co/datasets/a-m-team/AM-Qwen3-Distilled) dataset. This selection was performed efficiently using [AdaptThink](https://huggingface.co/THU-KEG/AdaptThink-7B-delta0.05), an adaptive thinking model that perceives problem difficulty with only a single forward pass, eliminating the need for solutions during selection. The problems generated by DiffGen-8B contribute to the creation of the [ScaleDiff-Math](https://huggingface.co/datasets/QizhiPei/ScaleDiff-Math) dataset.
 
 ## Training procedure
 
@@ -61,4 +62,4 @@ The following hyperparameters were used during training:
 - Transformers 4.52.0.dev0
 - Pytorch 2.6.0+cu124
 - Datasets 2.21.0
-- Tokenizers 0.21.1
+- Tokenizers 0.21.1
```