GGUF quants of nvidia/AceMath-72B-Instruct

Paper: AceMath: Advancing Frontier Math Reasoning with Post-Training and Reward Modeling (arXiv:2412.15084)
Using llama.cpp b4682 (commit 0893e0114e934bdd0eba0ff69d9ef8c59343cbc3)
The importance matrix was generated with InferenceIllusionist's groups_merged-enhancedV3.txt (later renamed calibration_datav3.txt), an edited version of kalomaze's original groups_merged.txt.
All quants, including the K quants, were generated and calibrated with this importance matrix.
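For reference, the imatrix-calibrated quantization flow above can be sketched with llama.cpp's command-line tools. This is a sketch, not the exact commands used for this repo: the binary names assume a llama.cpp build around b4682, and the file paths and the Q4_K_M target are placeholders.

```shell
# Sketch of the imatrix + quantization pipeline (assumption: llama.cpp b4682
# binary names; file paths below are placeholders, not this repo's actual files).
MODEL_F16="AceMath-72B-Instruct-f16.gguf"
CALIB="calibration_datav3.txt"
QUANT="Q4_K_M"   # example target; the same flow applies to every quant level

if command -v llama-imatrix >/dev/null 2>&1; then
  # 1. Collect activation statistics over the calibration text.
  llama-imatrix -m "$MODEL_F16" -f "$CALIB" -o imatrix.dat
  # 2. Quantize, letting the importance matrix guide which weights keep precision.
  llama-quantize --imatrix imatrix.dat "$MODEL_F16" \
      "AceMath-72B-Instruct-$QUANT.gguf" "$QUANT"
else
  echo "llama.cpp tools not on PATH; commands shown for reference only"
fi
```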
1-bit
2-bit
3-bit
4-bit
5-bit
6-bit
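When choosing among these levels, file size scales roughly with bits per weight. A quick back-of-the-envelope estimate (the bits-per-weight figures below are typical approximations for these quant types, not measured values for these files):

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough file-size estimate: parameters * bits / 8, in gigabytes.
    Ignores metadata and mixed-precision tensors, so real files differ somewhat."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumption: ~72e9 parameters as a round figure for the 72B model,
# and approximate effective bits-per-weight for each quant type.
for name, bpw in [("Q2_K", 2.6), ("Q4_K_M", 4.8), ("Q6_K", 6.6)]:
    print(f"{name}: ~{gguf_size_gb(72e9, bpw):.0f} GB")
```

Quants below roughly 3 bits save a lot of disk and VRAM but tend to degrade math accuracy more sharply, which matters for a reasoning-focused model like this one.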
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="redponike/AceMath-72B-Instruct-GGUF",
    filename="",  # set to the .gguf quant file you want to load
)
```
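A minimal end-to-end sketch of querying the loaded model via the chat-completion API. The filename pattern and the helper names below are assumptions for illustration, not part of this repo; the model call is left commented out because loading a 72B quant downloads tens of gigabytes.

```python
def load_model():
    """Download and load a quant. The filename glob is a hypothetical
    example; adjust it to the quant file you actually want (multi-GB)."""
    from llama_cpp import Llama
    return Llama.from_pretrained(
        repo_id="redponike/AceMath-72B-Instruct-GGUF",
        filename="*Q4_K_M*.gguf",  # assumption: pick your quant level here
        n_ctx=4096,
    )

def build_messages(question):
    # Plain single-turn chat: one user message, no system prompt.
    return [{"role": "user", "content": question}]

def ask(llm, question):
    res = llm.create_chat_completion(messages=build_messages(question),
                                     temperature=0.0)
    return res["choices"][0]["message"]["content"]

# Usage (commented out: loads roughly 40 GB of weights):
# llm = load_model()
# print(ask(llm, "Compute 12 * 34."))
```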