Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

MathCoder-L-13B - GGUF
- Model creator: https://huggingface.co/MathLLMs/
- Original model: https://huggingface.co/MathLLMs/MathCoder-L-13B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MathCoder-L-13B.Q2_K.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q2_K.gguf) | Q2_K | 4.52GB |
| [MathCoder-L-13B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [MathCoder-L-13B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [MathCoder-L-13B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [MathCoder-L-13B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [MathCoder-L-13B.Q3_K.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q3_K.gguf) | Q3_K | 5.9GB |
| [MathCoder-L-13B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [MathCoder-L-13B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [MathCoder-L-13B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [MathCoder-L-13B.Q4_0.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q4_0.gguf) | Q4_0 | 6.86GB |
| [MathCoder-L-13B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [MathCoder-L-13B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [MathCoder-L-13B.Q4_K.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q4_K.gguf) | Q4_K | 7.33GB |
| [MathCoder-L-13B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [MathCoder-L-13B.Q4_1.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q4_1.gguf) | Q4_1 | 7.61GB |
| [MathCoder-L-13B.Q5_0.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q5_0.gguf) | Q5_0 | 8.36GB |
| [MathCoder-L-13B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [MathCoder-L-13B.Q5_K.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q5_K.gguf) | Q5_K | 8.6GB |
| [MathCoder-L-13B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [MathCoder-L-13B.Q5_1.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q5_1.gguf) | Q5_1 | 9.1GB |
| [MathCoder-L-13B.Q6_K.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q6_K.gguf) | Q6_K | 9.95GB |
| [MathCoder-L-13B.Q8_0.gguf](https://huggingface.co/RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf/blob/main/MathCoder-L-13B.Q8_0.gguf) | Q8_0 | 12.88GB |

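To try one of these files, download it and load it with a GGUF-compatible runtime such as llama.cpp. A minimal sketch, assuming `huggingface-cli` (installed with `huggingface_hub`) and a built llama.cpp checkout; the Q4_K_M quant, the prompt, and the token count are only examples, and the binary is named `main` in older llama.cpp builds:

```shell
# Download a single quant file from this repo (Q4_K_M as an example).
huggingface-cli download RichardErkhov/MathLLMs_-_MathCoder-L-13B-gguf \
    MathCoder-L-13B.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp, generating up to 256 tokens.
./llama-cli -m MathCoder-L-13B.Q4_K_M.gguf \
    -p "Find the sum of the first 100 positive integers." -n 256
```

Smaller quants in the table trade answer quality for lower memory use; pick the largest one that fits your RAM/VRAM.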
Original model description:
---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
---
# MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning

Paper: [https://arxiv.org/pdf/2310.03731.pdf](https://arxiv.org/pdf/2310.03731.pdf)

Repo: [https://github.com/mathllm/MathCoder](https://github.com/mathllm/MathCoder)

## Introduction
We introduce MathCoder, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving.

| Base Model: Llama-2 | Base Model: Code Llama |
|-------------------------------------------------------------------|-----------------------------------------------------------------------|
| [MathCoder-L-7B](https://huggingface.co/MathLLM/MathCoder-L-7B) | [MathCoder-CL-7B](https://huggingface.co/MathLLM/MathCoder-CL-7B) |
| [MathCoder-L-13B](https://huggingface.co/MathLLM/MathCoder-L-13B) | [MathCoder-CL-34B](https://huggingface.co/MathLLM/MathCoder-CL-34B) |

## Training Data
The models are trained on the [MathCodeInstruct](https://huggingface.co/datasets/MathLLM/MathCodeInstruct) dataset.

## Training Procedure
The models are fine-tuned on the MathCodeInstruct dataset using the original Llama-2 and CodeLlama models as base models. Check out our paper and repo for more details.

## Evaluation

<br>
<div align="center">
<img src="result.png" width="100%" title="Result Figure">
</div>

## Usage
You can use the models through Hugging Face's Transformers library. Use the `pipeline` function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution.
Check our GitHub repo for details.

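The pipeline-based usage above can be sketched as follows. The model id, generation settings, and example problem are assumptions for illustration (this runs the original full-precision checkpoint, not the GGUF quants); the exact prompt format MathCoder expects is documented in the MathCoder GitHub repo.

```python
def solve(problem: str, model_id: str = "MathLLM/MathCoder-L-13B") -> str:
    """Run a math problem through a text-generation pipeline and return the output."""
    # Imported lazily; requires `pip install transformers` and enough
    # memory to hold a 13B-parameter model.
    from transformers import pipeline

    generator = pipeline("text-generation", model=model_id)
    result = generator(problem, max_new_tokens=512, do_sample=False)
    return result[0]["generated_text"]

if __name__ == "__main__":
    print(solve("Find the sum of the first 100 positive integers."))
```

Greedy decoding (`do_sample=False`) is used here so the same problem yields the same solution on every run; enable sampling if you want varied solution paths.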
## Citation

Please cite the paper if you use our data, model, or code. Please also kindly cite the original dataset papers.

```bibtex
@inproceedings{
wang2024mathcoder,
title={MathCoder: Seamless Code Integration in {LLM}s for Enhanced Mathematical Reasoning},
author={Ke Wang and Houxing Ren and Aojun Zhou and Zimu Lu and Sichun Luo and Weikang Shi and Renrui Zhang and Linqi Song and Mingjie Zhan and Hongsheng Li},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=z8TW0ttBPp}
}
```

```bibtex
@inproceedings{
zhou2024solving,
title={Solving Challenging Math Word Problems Using {GPT}-4 Code Interpreter with Code-based Self-Verification},
author={Aojun Zhou and Ke Wang and Zimu Lu and Weikang Shi and Sichun Luo and Zipeng Qin and Shaoqing Lu and Anya Jia and Linqi Song and Mingjie Zhan and Hongsheng Li},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=c8McWs4Av0}
}
```