Commit 5ab7cfe · Parent(s): b2e88d0 · Update README.md

README.md CHANGED
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased).
## Model description

See the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model card for more details. The only architectural modification was to the classification head, which here has 7 output classes.
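The seven problem-type classes can be wired into a 7-way head via label maps; a minimal sketch (the id ordering below is an illustrative assumption, not taken from the model's actual config):

```python
# The seven problem-type classes used by the classification head.
# NOTE: the id ordering here is an assumption for illustration; the
# model's actual config may order the labels differently.
LABELS = [
    "Algebra",
    "Counting & Probability",
    "Geometry",
    "Intermediate Algebra",
    "Number Theory",
    "Prealgebra",
    "Precalculus",
]

id2label = {i: name for i, name in enumerate(LABELS)}
label2id = {name: i for i, name in enumerate(LABELS)}

print(len(id2label))   # 7 classes, matching the modified head
```

Maps like these would typically be passed to `AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=7, id2label=id2label, label2id=label2id)` when swapping in the new head.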
## Intended uses & limitations

This model is intended for demonstration purposes only. The problem-type data is in English and contains many LaTeX tokens.
## Training and evaluation data

The `problem` field of the [competition_math dataset](https://huggingface.co/datasets/competition_math) was used as the training and evaluation input; the target labels were taken from the `type` field.
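As a sketch of the input/target pairing described above (the record below is a made-up example in the shape of a `competition_math` row, not an actual dataset entry; real rows also carry fields such as `level` and `solution`):

```python
# Made-up record in the shape of a competition_math example,
# for illustration only -- not a real dataset row.
example = {
    "problem": "Compute $1 + 2 + \\dots + 10$.",  # note the LaTeX tokens
    "type": "Algebra",
}

# Input text for the encoder comes from the `problem` field;
# the classification target comes from the `type` field.
text = example["problem"]
label = example["type"]

print(label)
```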
## Training procedure

The following hyperparameters were used during training:
### Training results

This fine-tuned model achieves the following results on the problem-type competition_math test set:

```
                        precision    recall  f1-score   support

               Algebra       0.78      0.79      0.79      1187
Counting & Probability       0.75      0.81      0.78       474
              Geometry       0.76      0.83      0.79       479
  Intermediate Algebra       0.86      0.84      0.85       903
         Number Theory       0.79      0.82      0.80       540
            Prealgebra       0.66      0.61      0.63       871
           Precalculus       0.95      0.89      0.92       546

              accuracy                           0.79      5000
             macro avg       0.79      0.80      0.79      5000
          weighted avg       0.79      0.79      0.79      5000
```
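The columns in the report above follow the standard per-class definitions; a self-contained illustration for a single class with toy predictions (the labels below are invented for the arithmetic, not the model's outputs):

```python
# Toy one-vs-rest view of a single class ("Algebra" vs. everything else).
# These labels are invented for illustration only.
y_true = ["Algebra", "Algebra", "Geometry", "Algebra", "Geometry"]
y_pred = ["Algebra", "Geometry", "Geometry", "Algebra", "Algebra"]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == "Algebra")
fp = sum(1 for t, p in zip(y_true, y_pred) if t != "Algebra" and p == "Algebra")
fn = sum(1 for t, p in zip(y_true, y_pred) if t == "Algebra" and p != "Algebra")

precision = tp / (tp + fp)  # of predicted Algebra, how many were correct
recall = tp / (tp + fn)     # of true Algebra, how many were found
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 2), round(recall, 2), round(f1, 2))
```

A full report of this shape is conventionally produced with scikit-learn's `classification_report`, which applies the same formulas per class and adds the accuracy, macro, and weighted averages.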
### Framework versions