---
license: cc-by-nc-4.0
datasets:
- fdqerq22ds/MathScaleQA-2M
---

## Overview

This is a reproduced MathScale-Mistral model, obtained by fine-tuning [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on our reproduced [MathScaleQA-2M](https://huggingface.co/datasets/fdqerq22ds/MathScaleQA-2M) dataset, following the hyperparameters in the [original paper](https://arxiv.org/abs/2403.02884) to keep the reproduction faithful.
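As a minimal sketch, the model can be loaded like any other causal LM on the Hub with `transformers` (assuming the repo id `fdqerq2ds/MathScale-Mistral` shown in the table below; the plain `Question:/Answer:` prompt template here is an illustrative assumption, not a format confirmed by the paper):

```python
# Minimal usage sketch for the reproduced model via Hugging Face transformers.
# NOTE: the prompt template in build_prompt() is an assumed, illustrative format.
from transformers import AutoModelForCausalLM, AutoTokenizer


def build_prompt(question: str) -> str:
    """Wrap a math question in a simple instruction-style prompt (assumed format)."""
    return f"Question: {question}\nAnswer:"


if __name__ == "__main__":
    model_id = "fdqerq22ds/MathScale-Mistral"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt("What is 12 * 7?"), return_tensors="pt").to(model.device)
    # Greedy decoding keeps math answers deterministic.
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Strip the prompt tokens and print only the generated continuation.
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```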

## Reproduction Details

The reproduction went smoothly, and we matched the reported performance when evaluating on their [MWPBench](https://github.com/microsoft/unilm/tree/master/mathscale/MWPBench) benchmark. Below, we compare the performance of their official model with that of our reproduced model:

| Model | GSM8K | MATH | CollegeMath | TAL | Math23k | Ape210k | GaokaoBench-Math | AGIE-Gaokao-Math | AGIE-SAT-Math | AGIE-MATH | MicroAverage | MacroAverage |
|-------------------------------|-------|------|-------------|------|---------|---------|------------------|------------------|---------------|-----------|--------------|--------------|
| Official MathScale-Mistral | 74.8 | 35.2 | 21.8 | 39.9 | 64.4 | 46.0 | 21.4 | 14.3 | 57.8 | 32.9 | 38.7 | 40.8 |
| [Reproduced MathScale-Mistral](https://huggingface.co/fdqerq22ds/MathScale-Mistral) | 74.0 | 34.5 | 22.0 | 39.6 | 61.7 | 45.1 | 21.6 | 15.5 | 56.8 | 34.4 | 38.3 | 40.5 |