How to use from vLLM
Install vLLM from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "OFA-Sys/MuggleMath_7B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "OFA-Sys/MuggleMath_7B",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
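The same server can also be called from Python through vLLM's OpenAI-compatible API. The snippet below is a minimal sketch using the openai client package; the prompt and sampling settings are illustrative, and the placeholder api_key is only needed because the client requires a non-empty value (vLLM does not check it unless a key is configured).
# Query the running vLLM server via its OpenAI-compatible API (pip install openai).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # placeholder; vLLM ignores it unless an API key is configured
)

completion = client.completions.create(
    model="OFA-Sys/MuggleMath_7B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)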
Use Docker
docker model run hf.co/OFA-Sys/MuggleMath_7B
Quick Links

See our paper at: https://arxiv.org/abs/2310.05506

Model Details

MuggleMATH is fully fine-tuned on the AugGSM8K and AugMATH datasets and is based on the LLaMA-2 models.

Model Usage

Prompting template:

"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

We recommend using vLLM to accelerate inference; a usage sketch follows below.
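A minimal offline-inference sketch with vLLM, assuming the template above; the sampling parameters and the example GSM8K-style question are illustrative, not official settings.
# Offline inference with vLLM using the prompting template above.
# Sampling parameters below are illustrative assumptions, not official settings.
from vllm import LLM, SamplingParams

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

llm = LLM(model="OFA-Sys/MuggleMath_7B")
params = SamplingParams(temperature=0.0, max_tokens=512)

question = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
outputs = llm.generate([PROMPT_TEMPLATE.format(instruction=question)], params)
print(outputs[0].outputs[0].text)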

Experiments

Accuracy (%) on GSM8K and MATH:

Model             GSM8K   MATH
MuggleMATH-7B      69.8    25.8
MuggleMATH-13B     74.3    30.7
MuggleMATH-70B     82.5    42.1

Citation

@misc{li2023query,
  title={Query and Response Augmentation Cannot Help Out-of-domain Math Reasoning Generalization},
  author={Chengpeng Li and Zheng Yuan and Hongyi Yuan and Guanting Dong and Keming Lu and Jiancan Wu and Chuanqi Tan and Xiang Wang and Chang Zhou},
  journal={arXiv preprint arXiv:2310.05506},
  year={2023}
}
