AceMath: Advancing Frontier Math Reasoning with Post-Training and Reward Modeling
Paper: arXiv:2412.15084
GGUF quants of nvidia/AceMath-72B-Instruct

Use this model with llama.cpp.

# Install with winget (Windows) or brew (macOS/Linux):
winget install llama.cpp
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf redponike/AceMath-72B-Instruct-GGUF

# Run inference directly in the terminal:
llama-cli -hf redponike/AceMath-72B-Instruct-GGUF

# Alternatively, download a pre-built binary from
# https://github.com/ggerganov/llama.cpp/releases
# and run it from the download directory:
./llama-server -hf redponike/AceMath-72B-Instruct-GGUF
./llama-cli -hf redponike/AceMath-72B-Instruct-GGUF

# Or build from source:
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
./build/bin/llama-server -hf redponike/AceMath-72B-Instruct-GGUF
./build/bin/llama-cli -hf redponike/AceMath-72B-Instruct-GGUF

# Or run the model with Docker:
docker model run hf.co/redponike/AceMath-72B-Instruct-GGUF
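Once llama-server is running, the model can be queried over its OpenAI-compatible HTTP API. A minimal sketch, assuming the server is listening on its default port 8080:

# Query the local server's OpenAI-compatible chat endpoint (default port 8080 assumed):
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Compute the sum of the first 100 positive integers."}
        ],
        "temperature": 0.0
      }'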
These quants were made using llama.cpp b4682 (commit 0893e0114e934bdd0eba0ff69d9ef8c59343cbc3).
The importance matrix was generated from InferenceIllusionist's groups_merged-enhancedV3.txt (later renamed calibration_datav3.txt), an edited version of kalomaze's original groups_merged.txt.
All quants were generated/calibrated with the imatrix, including the K quants.
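For reference, imatrix-calibrated quants like these are typically produced with llama.cpp's llama-imatrix and llama-quantize tools. The commands below are a generic sketch with illustrative filenames, not the exact invocation used for this repo:

# Build an importance matrix from the calibration text (filenames are illustrative):
./build/bin/llama-imatrix -m AceMath-72B-Instruct-f16.gguf -f calibration_datav3.txt -o imatrix.dat

# Quantize with the imatrix applied (Q4_K_M shown only as an example target):
./build/bin/llama-quantize --imatrix imatrix.dat AceMath-72B-Instruct-f16.gguf AceMath-72B-Instruct-Q4_K_M.gguf Q4_K_M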
Available quantization levels: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, and 6-bit.
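To fetch a specific quantization level rather than the default, append its tag to the -hf argument; the Q4_K_M tag below is only an illustrative example, so check this repo's file list for the exact tags available.

# Example: run a specific quant by tag (Q4_K_M used illustratively):
llama-cli -hf redponike/AceMath-72B-Instruct-GGUF:Q4_K_M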