---
license: llama3
library_name: transformers
pipeline_tag: text-generation
---

# Model Card for LEMMA-LLAMA-3-70B

**Key Takeaways**

πŸ’‘ **Systematic analysis of error types**: Categorizes common model-generated mathematical reasoning errors, revealing consistent error patterns across models and guiding targeted improvements.

πŸ’‘ **Error-type-grounded error augmentation**: Introduces diverse and meaningful errors by leveraging a teacher model to _intentionally inject representative mistakes_, with the error type sampled from the analyzed distribution, enhancing the model's ability to learn from failures.

πŸ’‘ **Two complementary self-correction mechanisms**: Combines _Fix & Continue_ (correcting mistakes within the original reasoning) and _Fresh & Restart_ (restarting the reasoning process from scratch) to generate effective revision trajectories.

βœ… **LEMMA** – A novel framework that fine-tunes LLMs on error-corrective trajectories, enabling autonomous error detection and correction during mathematical reasoning.

πŸ“Š **Result** – Up to 13.3% accuracy improvement for LLaMA3-8B with only 90k synthesized training examples.

The LEMMA series models are trained on the [LEMMA Dataset](https://huggingface.co/datasets/panzs19/LEMMA), which is built from the training sets of MATH and GSM8K. For each question, the student model (LLaMA3-8B) produces its own erroneous solutions, and the teacher model (GPT-4o) deliberately injects additional errors sampled from the student's error-type distribution. Both the "Fix & Continue" and "Fresh & Restart" correction strategies are then applied to these errors to create error-corrective revision trajectories (see the schematic sketch at the end of this card). After filtering out trajectories with incorrect final answers, we obtain the final dataset. Fine-tuning on it achieves up to a 13.3% average accuracy improvement for LLaMA3-8B with fewer than 90k synthesized examples. For more details, please refer to our paper [LEMMA: Learning from Errors for MatheMatical Advancement in LLMs](https://arxiv.org/abs/2503.17439).

## Model Details

### Model Description

- **Finetuned from model:** [Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B)

### Model Sources

- **Repository:** [https://github.com/pzs19/LEMMA/](https://github.com/pzs19/LEMMA/)
- **Paper:** [https://arxiv.org/abs/2503.17439](https://arxiv.org/abs/2503.17439)

### Direct Use

The same as Llama-3-70B; a minimal loading sketch is given at the end of this card.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

## Training Details

The LEMMA series models are trained on the [LEMMA Dataset](https://huggingface.co/datasets/panzs19/LEMMA) using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). For more details, please refer to our [paper](https://arxiv.org/abs/2503.17439).

### Results

| Model | Checkpoint | Paper | GSM8k | MATH | License |
| ----- | ---------- | ----- | ----- | ---- | ------- |
| LEMMA-LLAMA-3-8B | πŸ€— HF Link | πŸ“ƒ [LEMMA](https://arxiv.org/abs/2503.17439) | **79.2** | **38.3** | Llama 3 |
| LEMMA-LLAMA-3-70B | πŸ€— HF Link | πŸ“ƒ [LEMMA](https://arxiv.org/abs/2503.17439) | **91.5** | **51.8** | Llama 3 |

## Citation

Please cite the paper if you refer to our model, code, or data.

```
@article{LEMMA,
  title={LEMMA: Learning from Errors for MatheMatical Advancement in LLMs},
  author={Zhuoshi Pan and Yu Li and Honglin Lin and Qizhi Pei and Zinan Tang and Wei Wu and Chenlin Ming and H. Vicky Zhao and Conghui He and Lijun Wu},
  journal={arXiv preprint arXiv:2503.17439},
  year={2025}
}
```
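
## How to Get Started with the Model

A minimal loading and inference sketch using the standard πŸ€— Transformers causal-LM API. The repository id below is an assumption (derived from the dataset namespace) and should be replaced with the actual checkpoint id on the Hub; the GSM8K-style question is only an illustration.

```python
# Minimal usage sketch with the Transformers causal-LM API.
# NOTE: "panzs19/LEMMA-LLAMA-3-70B" is a placeholder repo id (assumed from the
# dataset namespace); replace it with the actual checkpoint id on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "panzs19/LEMMA-LLAMA-3-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 70B weights; device_map shards across GPUs
    device_map="auto",
)

question = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether "
    "in April and May?"
)
inputs = tokenizer(question, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Decode only the newly generated tokens: the model's reasoning trajectory,
# which may include self-corrections before the final answer.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```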
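
## Data Construction Sketch

For reference, the schematic below illustrates the data-construction loop described at the top of this card. It is illustrative pseudocode under our reading of the paper, not the released pipeline: `student_generate`, `teacher_inject_error`, `fix_and_continue`, `fresh_and_restart`, and `error_type_distribution` are hypothetical helpers standing in for the actual prompting code in the repository.

```python
# Illustrative pseudocode for the LEMMA trajectory-construction loop.
# All helpers below are hypothetical stand-ins for the actual prompting code.
import random

def build_lemma_trajectories(dataset, error_type_distribution):
    """dataset: iterable of (question, gold_answer) pairs from GSM8K/MATH."""
    trajectories = []
    for question, gold_answer in dataset:
        # 1) Student-generated errors: keep the student's own wrong solutions.
        erroneous = [s for s in student_generate(question)
                     if s.final_answer != gold_answer]

        # 2) Teacher-injected errors: sample an error type from the student's
        #    analyzed error-type distribution and have the teacher introduce it.
        error_type = random.choices(
            list(error_type_distribution.keys()),
            weights=list(error_type_distribution.values()),
        )[0]
        erroneous.append(teacher_inject_error(question, error_type))

        # 3) Apply both correction strategies to every erroneous solution.
        for bad_solution in erroneous:
            for correct in (fix_and_continue, fresh_and_restart):
                revision = correct(question, bad_solution)
                # 4) Keep only revisions whose final answer is correct.
                if revision.final_answer == gold_answer:
                    trajectories.append({"question": question,
                                         "revision": revision})
    return trajectories
```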