Improve model card: correct pipeline tag and add library name
This PR corrects the `pipeline_tag` to `question-answering`, which better reflects the model's application in mathematical reasoning. It also adds the `library_name` to the metadata for better interoperability.
README.md
CHANGED

```diff
@@ -1,11 +1,13 @@
 ---
-
+base_model:
+- Qwen/Qwen2.5-7B
 datasets:
 - agentica-org/DeepScaleR-Preview-Dataset
 language:
 - en
-
-
-
+license: apache-2.0
+pipeline_tag: question-answering
+library_name: transformers
 ---
-
+
+This is the model checkpoint associated with the paper [Pitfalls of Rule- and Model-based Verifiers -- A Case Study on Mathematical Reasoning](https://huggingface.co/papers/2505.22203). The model is RL-trained from the Qwen2.5-7B base on the DeepScaleR dataset. Training employed a hybrid verification strategy combining the Hugging Face Math Verifier and the open-source [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) as a model-based verifier.
```
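The updated frontmatter can be sanity-checked programmatically. A minimal sketch (the `parse_frontmatter` helper is hypothetical, handling only flat `key: value` pairs, not nested YAML) that reads the new metadata fields out of a README:

```python
README = """---
base_model:
- Qwen/Qwen2.5-7B
datasets:
- agentica-org/DeepScaleR-Preview-Dataset
language:
- en
license: apache-2.0
pipeline_tag: question-answering
library_name: transformers
---

Model card body...
"""

def parse_frontmatter(text):
    """Extract flat 'key: value' pairs from the YAML frontmatter.

    The frontmatter sits between the first two '---' markers.
    List items (lines starting with '-') are skipped in this sketch.
    """
    _, frontmatter, _ = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        if ":" in line and not line.startswith("-"):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

meta = parse_frontmatter(README)
print(meta["pipeline_tag"])   # question-answering
print(meta["library_name"])   # transformers
```

For stricter validation of nested fields such as `base_model`, a full YAML parser (e.g. PyYAML's `safe_load`) would be the usual choice.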