MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code
Paper: arXiv:2410.08196
This is a quantized version of MathGenie/MathCoder2-CodeLlama-7B, created using llama.cpp.

Install with winget (Windows):

winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/MathCoder2-CodeLlama-7B-GGUF
# Run inference directly in the terminal:
llama-cli -hf QuantFactory/MathCoder2-CodeLlama-7B-GGUF

Use a pre-built binary:

# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/MathCoder2-CodeLlama-7B-GGUF
# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/MathCoder2-CodeLlama-7B-GGUF

Build from source:

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/MathCoder2-CodeLlama-7B-GGUF
# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/MathCoder2-CodeLlama-7B-GGUF

Run with Docker:

docker model run hf.co/QuantFactory/MathCoder2-CodeLlama-7B-GGUF
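Once llama-server is running, you can query its OpenAI-compatible API from any HTTP client. A minimal sketch, assuming the server's default address of http://localhost:8080 (the prompt is just an illustration):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is the sum of the first 100 positive integers? Reason step by step."}
    ]
  }'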
The MathCoder2 models were created by continued pretraining on MathCode-Pile. They are introduced in the paper MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code.
This mathematical pretraining dataset pairs mathematical code with the corresponding natural-language reasoning steps, making it a strong resource for training models on advanced mathematical reasoning tasks.
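For example, a hypothetical terminal run (the prompt is illustrative, not taken from the paper) that exercises this combination of reasoning and code:

./llama-cli -hf QuantFactory/MathCoder2-CodeLlama-7B-GGUF \
  -p "Find the sum of all even integers between 1 and 100. Reason step by step, then write Python code that verifies the result." \
  -n 512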
If you find this repository helpful, please consider citing our papers:
@misc{lu2024mathcoder2bettermathreasoning,
title={MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code},
author={Zimu Lu and Aojun Zhou and Ke Wang and Houxing Ren and Weikang Shi and Junting Pan and Mingjie Zhan and Hongsheng Li},
year={2024},
eprint={2410.08196},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.08196},
}
@inproceedings{
wang2024mathcoder,
title={MathCoder: Seamless Code Integration in {LLM}s for Enhanced Mathematical Reasoning},
author={Ke Wang and Houxing Ren and Aojun Zhou and Zimu Lu and Sichun Luo and Weikang Shi and Renrui Zhang and Linqi Song and Mingjie Zhan and Hongsheng Li},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=z8TW0ttBPp}
}
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
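To select a specific quantization level rather than the default, llama.cpp's -hf flag accepts a quant tag after a colon. A minimal sketch; Q4_K_M is an assumed tag here, so check the repo's file list for the tags that actually exist:

# Q4_K_M is a common 4-bit quant tag; adjust to match the files in this repo
llama-cli -hf QuantFactory/MathCoder2-CodeLlama-7B-GGUF:Q4_K_M \
  -p "Compute 17 * 23 step by step."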
Base model: codellama/CodeLlama-7b-hf
Install with brew (macOS/Linux):

brew install llama.cpp

Then use the same llama-server and llama-cli commands shown above.