Update README.md
README.md CHANGED
@@ -57,6 +57,9 @@ MAMUT-BERT is intended for downstream tasks that require improved mathematical u
 
 **Note: This model was saved without the MLM or NSP heads and requires fine-tuning before use in downstream tasks.**
 
+Similarly trained models are [MAMUT-MathBERT based on `tbs17/MathBERT`](https://huggingface.co/aieng-lab/MathBERT-mamut) and [MAMUT-MPBERT based on `AnReu/math_structure_bert`](https://huggingface.co/ddrg/math_structure_bert) (best of the three models according to our evaluation).
+
+
 ## Training Details
 
 Training configurations are described in [Appendix C of the MAMUT paper](https://arxiv.org/abs/2502.20855).
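Because the checkpoint was saved without the MLM or NSP heads, a fresh task head has to be attached and trained before downstream use. A minimal sketch of that setup with the Hugging Face `transformers` library, using the related `aieng-lab/MathBERT-mamut` checkpoint linked above (the two-label classification task here is only an illustrative assumption):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the MAMUT checkpoint with a sequence-classification head.
# The head's weights are newly initialized (the checkpoint ships without
# MLM/NSP heads), so the model must be fine-tuned before it is useful.
model_id = "aieng-lab/MathBERT-mamut"  # one of the related MAMUT models
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    num_labels=2,  # hypothetical downstream task with two classes
)
```

From here the model can be fine-tuned as any BERT-style encoder, e.g. with the `transformers` `Trainer` or a plain PyTorch loop.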