---
license: apache-2.0
datasets:
- ddrg/math_text
- ddrg/math_formulas
- ddrg/named_math_formulas
- ddrg/math_formula_retrieval
language:
- en
base_model:
- AnReu/math_pretrained_bert
---

# MAMUT-MPBert (Math Mutator Math-Pretrained-BERT)

MAMUT-MPBERT is a pretrained language model based on [AnReu/math_pretrained_bert](https://huggingface.co/AnReu/math_pretrained_bert), further pretrained on mathematical texts and formulas.
It was introduced in [MAMUT: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training](https://arxiv.org/abs/2502.20855).

Although its base model is already a mathematical model, the additional MAMUT pretraining improves its mathematical understanding even further, as shown in our paper.

## Model Details

### Overview

MAMUT-MPBERT was pretrained on four math-specific tasks across four datasets.

- **[Mathematical Formulas (MF)](https://huggingface.co/datasets/ddrg/math_formulas):** A Masked Language Modeling (MLM) task on math formulas written in LaTeX.
- **[Mathematical Texts (MT)](https://huggingface.co/datasets/ddrg/math_text):** An MLM task on natural language text containing inline LaTeX math (*mathematical texts*). The masking probability was biased toward mathematical tokens (inside math environments, i.e., $...$) and domain-specific terms (e.g., *sum*, *one*, ...).
- **[Named Math Formulas (NMF)](https://huggingface.co/datasets/ddrg/named_math_formulas):** A Next-Sentence-Prediction (NSP)-style task: given a formula and the name of a mathematical identity (e.g., Pythagorean Theorem), classify whether they match.
- **[Math Formula Retrieval (MFR)](https://huggingface.co/datasets/ddrg/math_formula_retrieval):** Another NSP-style task: decide whether two formulas describe the same mathematical identity or concept (the sentence-pair encoding used by both NSP-style tasks is sketched below).

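Both NSP-style tasks (NMF and MFR) use BERT's standard sentence-pair input encoding. The snippet below is a minimal sketch of that format using the Hugging Face `transformers` library; the repository id is an assumption and should be replaced with this model's actual Hub id.

```python
from transformers import AutoTokenizer

# Assumed repository id -- replace with this model's actual Hub id.
MODEL_ID = "aieng-lab/math_pretrained_bert-mamut"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# NMF-style pair: sentence A is the name of an identity, sentence B is a
# candidate LaTeX formula; the model classifies whether the two match.
encoding = tokenizer(
    "Pythagorean theorem",
    r"a^2 + b^2 = c^2",
    return_tensors="pt",
)

# MFR-style pair: both sentences are formulas that may or may not
# describe the same identity, e.g.:
# encoding = tokenizer(r"a^2 + b^2 = c^2", r"c^2 - a^2 = b^2", return_tensors="pt")

print(tokenizer.decode(encoding["input_ids"][0]))
# [CLS] Pythagorean theorem [SEP] a ^ 2 + b ^ 2 = c ^ 2 [SEP]  (exact tokens may vary)
```
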
| |  |
| |
|
| |
|
### Model Sources

- **Base Model:** [AnReu/math_pretrained_bert](https://huggingface.co/AnReu/math_pretrained_bert) (whose base model is [bert-base-cased](https://huggingface.co/google-bert/bert-base-cased))
- **Pretraining Code:** [aieng-lab/transformer-math-pretraining](https://github.com/aieng-lab/transformer-math-pretraining)
- **MAMUT Repository:** [aieng-lab/math-mutator](https://github.com/aieng-lab/math-mutator)
- **Paper:** [MAMUT: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training](https://arxiv.org/abs/2502.20855)

## Uses

MAMUT-MPBERT is intended for downstream tasks that require improved mathematical understanding, such as:

- Formula classification
- Retrieval of *semantically* similar formulas
- Math-related question answering

**Note: This model was saved without the MLM or NSP heads and requires fine-tuning before use in downstream tasks.**

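As a starting point, a pair-classification head (e.g., for an MFR-style formula-matching task) can be attached when loading the checkpoint. This is a minimal sketch, again under the assumed repository id; the classification head is freshly initialized and its outputs are meaningless until fine-tuned.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repository id -- replace with this model's actual Hub id.
MODEL_ID = "aieng-lab/math_pretrained_bert-mamut"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Loads the pretrained encoder and attaches a *randomly initialized* binary
# classification head (the MLM/NSP pretraining heads are not in the checkpoint).
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

inputs = tokenizer(
    r"\sin^2(x) + \cos^2(x) = 1",
    r"1 - \cos^2(x) = \sin^2(x)",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits  # placeholder output until fine-tuned
```

From here, standard fine-tuning (for example with the `transformers` `Trainer`) applies.
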
Similarly trained models are [MAMUT-BERT](https://huggingface.co/aieng-lab/bert-base-cased-mamut) (based on `bert-base-cased`) and MAMUT-MathBERT (based on [tbs17/MathBERT](https://huggingface.co/tbs17/MathBERT)). However, this model (MAMUT-MPBERT) performed best in our evaluation.

## Training Details

Training configurations are described in [Appendix C of the MAMUT paper](https://arxiv.org/abs/2502.20855).

## Evaluation

The model is evaluated in [Section 7 and Appendix C.4 of the MAMUT paper](https://arxiv.org/abs/2502.20855), where it is referred to as MAMUT-MPBERT.

## Environmental Impact

- **Hardware Type:** 8× NVIDIA A100 GPUs
- **Hours used:** 48
- **Compute Region:** Germany

## Citation

**BibTeX:**

```bibtex
@article{drechsel2025mamut,
  title={{MAMUT}: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training},
  author={Jonathan Drechsel and Anja Reusch and Steffen Herbold},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=khODmRpQEx}
}
```