> Peking University, Skywork AI, UC Berkeley, Stanford University
<p align="left">
  <a href='https://arxiv.org/abs/2410.09008'>
  <img src='https://img.shields.io/badge/Arxiv-2410.09008-A42C25?style=flat&logo=arXiv&logoColor=A42C25'></a>
<a href='https://huggingface.co/BitStarWalkin/SuperCorrect-7B'>
<img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-yellow'></a>
</p>

This repo provides the official implementation of **SuperCorrect**.

Notably, our **SuperCorrect-7B** model significantly surpasses powerful **DeepSeekMath-7B by 7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3% on MATH/GSM8K benchmarks**, achieving new SOTA performance among all 7B models.

Detailed performance and introduction are shown in our <a href="https://arxiv.org/abs/2410.09008"> 📑 Paper</a>.
<div align="left">
🚨 Unlike other LLMs, we equip LLMs with our pre-defined hierarchical thought template ([Buffer of Thought (BoT)](https://github.com/YangLing0818/buffer-of-thought-llm)) to conduct more deliberate reasoning than conventional CoT. Note that our evaluation method relies on the pure mathematical reasoning abilities of LLMs, rather than leveraging programming-aided methods such as PoT and ToRA.
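To illustrate the idea, the snippet below sketches how a hierarchical thought template might wrap a problem before it is sent to the model. The tag names and section wording here are hypothetical placeholders for illustration only, not the actual SuperCorrect templates:

```python
# Hypothetical sketch of a BoT-style hierarchical prompt builder.
# Tag names and template wording are illustrative placeholders,
# not the actual templates used by SuperCorrect.

def build_hierarchical_prompt(problem: str) -> str:
    """Wrap a math problem in a hierarchical thought template:
    a high-level plan section followed by a detailed solution section."""
    sections = [
        "<problem>\n" + problem + "\n</problem>",
        "<high_level_plan>\nFirst outline the key solution steps.\n</high_level_plan>",
        "<detailed_solution>\nThen carry out each step and verify it.\n</detailed_solution>",
    ]
    return "\n".join(sections)

prompt = build_hierarchical_prompt("What is 12 * 7?")
print(prompt)
```

In practice, a filled-in template like this would be passed as the user message to the model, steering it to plan before solving instead of emitting a flat chain of thought.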
<div align="left">
<b>
🚨 For a more concise and clear presentation, we omit some XML tags.
</b>
</div>

We evaluate our SuperCorrect-7B on two widely used English math benchmarks, GSM8K and MATH.

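Pure answer-matching evaluation of this kind is usually implemented by extracting each solution's final answer and comparing it exactly. A minimal sketch, assuming GSM8K's `#### <answer>` convention; the helper names are ours, not from the SuperCorrect codebase:

```python
# Minimal sketch of exact-match answer evaluation in the GSM8K style.
# GSM8K reference solutions end with "#### <answer>"; the helper names
# here are illustrative, not part of the SuperCorrect codebase.

def extract_final_answer(solution: str) -> str:
    """Return the text after the last '####' marker, with whitespace
    and thousands separators stripped."""
    answer = solution.rsplit("####", 1)[-1]
    return answer.strip().replace(",", "")

def is_correct(model_output: str, reference: str) -> bool:
    """Exact string match between extracted final answers."""
    return extract_final_answer(model_output) == extract_final_answer(reference)

print(is_correct("Step 1: 6 * 7 = 42. #### 42", "... #### 42"))  # prints: True
```

Because only the final answer is compared, this style of scoring rewards the model's own reasoning rather than any external program execution.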
## Citation
```bibtex
@article{yang2024supercorrect,
  title={SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights},
  author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Xu, Minkai and Gonzalez, Joseph E and Cui, Bin and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2410.09008},
  year={2024}
}
@article{yang2024buffer,
title={Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models},