panzs19 and nielsr (HF Staff) committed
Commit 09ce844 · verified · 1 parent: 8ff8130

Add library name and pipeline tag (#1)


- Add library name and pipeline tag (6d0120be4b0eb3f29c294d4d8904b0737a4b136d)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1): README.md (+16 −1)
README.md CHANGED

@@ -1,8 +1,23 @@
 ---
 license: llama3
+library_name: transformers
+pipeline_tag: text-generation
 ---
+
 # Model Card for Model ID

+**Key Takeaways**
+
+💡 **Systematic analysis of error types**: Categorizes common model-generated mathematical reasoning errors, revealing consistent error patterns across models and guiding targeted improvements.
+
+💡 **Error-type-grounded error augmentation**: Introduces diverse and meaningful errors by using a teacher model to _intentionally inject representative mistakes_, with error types sampled from the analyzed distribution, enhancing the model's ability to learn from failures.
+
+💡 **Two complementary self-correction mechanisms**: Combines _Fix & Continue_ (correcting mistakes within the original reasoning) and _Fresh & Restart_ (restarting the reasoning process from scratch) to generate effective revision trajectories.
+
+✅ **LEMMA** – A novel framework that fine-tunes LLMs on error-corrective trajectories, enabling autonomous error detection and correction during mathematical reasoning.
+
+📊 **Result** – Up to a 13.3% average accuracy improvement for LLaMA3-8B with fewer than 90k synthesized training examples.
+
 <!-- Provide a quick summary of what the model is/does. -->

 The LEMMA series models are trained on the [LEMMA Dataset](https://huggingface.co/datasets/panzs19/LEMMA). The dataset is built from the training sets of MATH and GSM8K: for each question, erroneous solutions are collected from the student model (LLaMA3-8B) itself, and the teacher model (GPT-4o) deliberately injects additional errors sampled from the student model's error-type distribution. Both "Fix & Continue" and "Fresh & Restart" correction strategies are then applied to these errors to create error-corrective revision trajectories, and trajectories with incorrect final answers are filtered out. Fine-tuning on the resulting dataset yields up to a 13.3% average accuracy improvement for LLaMA3-8B with fewer than 90k synthesized training examples. For more details, please refer to our paper [LEMMA: Learning from Errors for MatheMatical Advancement in LLMs](https://arxiv.org/abs/2503.17439).
@@ -48,4 +63,4 @@ Please cite the paper if you refer to our model, code, data or paper from MetaMa
   journal={arXiv preprint arXiv:2503.17439},
   year={2025}
 }
-```
+```
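
Since this commit adds `library_name: transformers` and `pipeline_tag: text-generation` to the card metadata, the model is expected to load through the standard transformers text-generation pipeline. A minimal sketch follows; the repo ID `panzs19/LEMMA-LLaMA3-8B` is a placeholder assumption (use the actual model repo ID), and the prompt format may differ from what the model was fine-tuned on.

```python
# Minimal usage sketch based on the new `library_name`/`pipeline_tag` metadata.
# The repo ID below is a placeholder assumption, not confirmed by this commit.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="panzs19/LEMMA-LLaMA3-8B",  # hypothetical repo ID
    device_map="auto",
)

prompt = "Question: What is 15% of 240?\nAnswer:"
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```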
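The "error-type-grounded error augmentation" bullet describes sampling an error type from the student model's analyzed error distribution before the teacher injects a mistake. A minimal sketch of that sampling step, assuming hypothetical category names and placeholder probabilities (the paper's actual taxonomy and numbers are not part of this commit):

```python
# Sketch of error-type sampling for teacher-side error injection.
# Category names and probabilities are illustrative placeholders only.
import random

ERROR_TYPE_DISTRIBUTION = {
    "calculation_error": 0.35,          # placeholder, not the paper's figure
    "question_misunderstanding": 0.25,
    "missing_step": 0.20,
    "wrong_formula_or_concept": 0.20,
}

def sample_error_type(rng: random.Random) -> str:
    """Draw one error type according to the student's empirical distribution."""
    types, weights = zip(*ERROR_TYPE_DISTRIBUTION.items())
    return rng.choices(types, weights=weights, k=1)[0]

rng = random.Random(42)
print(sample_error_type(rng))  # the teacher would then inject this kind of mistake
```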
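The card's description also implies a final filtering pass: revision trajectories produced by the "Fix & Continue" and "Fresh & Restart" strategies are kept only when their final answer is correct. A minimal sketch of that filter, with hypothetical helpers (`revise` standing in for the two correction strategies) and an assumed answer-extraction convention:

```python
# Sketch of the answer-based filtering step described in the card.
# `revise` and the "The answer is" convention are assumptions for illustration.
from typing import Callable, Iterable

def extract_final_answer(trajectory: str) -> str:
    """Assumed convention: the final answer follows the phrase 'The answer is'."""
    marker = "The answer is"
    if marker not in trajectory:
        return ""
    return trajectory.rsplit(marker, 1)[-1].strip().strip(".")

def filter_trajectories(
    questions: Iterable[str],
    gold_answers: Iterable[str],
    revise: Callable[[str], list[str]],  # hypothetical: yields revision trajectories
) -> list[dict]:
    """Keep only revision trajectories whose final answer matches the gold answer."""
    kept = []
    for question, gold in zip(questions, gold_answers):
        for traj in revise(question):
            if extract_final_answer(traj) == gold:  # drop incorrect final answers
                kept.append({"question": question, "trajectory": traj})
    return kept
```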