# Add question-answering pipeline tag and link to code
#1, opened by nielsr (HF Staff)
## README.md (CHANGED)

```diff
@@ -1,19 +1,21 @@
 ---
+base_model:
+- Qwen/Qwen2.5-Math-7B-Instruct
 language:
 - en
+library_name: transformers
+license: apache-2.0
 metrics:
 - accuracy
-base_model:
-- Qwen/Qwen2.5-Math-7B-Instruct
-library_name: transformers
+pipeline_tag: question-answering
 ---
 
 [](https://hf.co/QuantFactory)
 
+This repository contains a quantized version of the model presented in [SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights](https://huggingface.co/papers/2410.09008).
+The original model card can be found [here](https://huggingface.co/BitStarWalkin/SuperCorrect-7B).
+
+Code: https://github.com/YangLing0818/SuperCorrect-llm
 
 # QuantFactory/SuperCorrect-7B-GGUF
 This is quantized version of [BitStarWalkin/SuperCorrect-7B](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) created using llama.cpp
@@ -140,4 +142,4 @@ title={SuperCorrect: Supervising and Correcting Language Models with Error-Drive
 
 ## Acknowledgements
 
-Our SuperCorrect is a two-stage fine-tuning model which based on several extraordinary open-source models like [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math), [DeepSeek-Math](https://github.com/deepseek-ai/DeepSeek-Math), [Llama3-Series](https://github.com/meta-llama/llama3). Our evaluation method is based on the code base of outstanding works like [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math) and [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). We also want to express our gratitude for amazing works such as [BoT](https://github.com/YangLing0818/buffer-of-thought-llm) which provides the idea of thought template.
+Our SuperCorrect is a two-stage fine-tuning model which based on several extraordinary open-source models like [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math), [DeepSeek-Math](https://github.com/deepseek-ai/DeepSeek-Math), [Llama3-Series](https://github.com/meta-llama/llama3). Our evaluation method is based on the code base of outstanding works like [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math) and [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). We also want to express our gratitude for amazing works such as [BoT](https://github.com/YangLing0818/buffer-of-thought-llm) which provides the idea of thought template.
```
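The change above edits the flat YAML front matter of the model card (scalar `key: value` pairs plus `- item` lists). As a minimal sketch of what the resulting metadata looks like when read programmatically — using a hand-rolled parser for this flat subset, not the Hub's actual YAML tooling; the `parse_front_matter` helper is hypothetical:

```python
# Front-matter content taken from the "+" side of the diff above.
front_matter = """\
base_model:
- Qwen/Qwen2.5-Math-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
pipeline_tag: question-answering
"""

def parse_front_matter(text):
    """Minimal parser for a flat YAML subset: top-level `key: value`
    pairs and `- item` lists. Hypothetical helper, for illustration only."""
    data, current_key = {}, None
    for line in text.splitlines():
        if line.startswith("- ") and current_key is not None:
            # List item belonging to the most recent bare key.
            data.setdefault(current_key, []).append(line[2:].strip())
        else:
            key, _, value = line.partition(":")
            current_key = key.strip()
            if value.strip():
                data[current_key] = value.strip()
    return data

meta = parse_front_matter(front_matter)
print(meta["pipeline_tag"])  # -> question-answering
```

This only handles the flat shape seen in this card; real model-card metadata can nest arbitrarily and should be parsed with a full YAML library.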