Quantizations of https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf

From the original readme:

OpenMath models were designed to solve mathematical problems by integrating text-based reasoning with code blocks executed by a Python interpreter. The models were trained on OpenMathInstruct-1, a math instruction-tuning dataset with 1.8M problem-solution pairs generated using the permissively licensed Mixtral-8x7B model.

How to use the models?

Run inference with these models in just a few commands!
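As a minimal sketch, one way to run a GGUF quant from this repo locally is with llama.cpp's CLI. The repo path and the quant file name below (`OpenMath-Mistral-7B-v0.1-hf.Q4_K_M.gguf`) are placeholders, not confirmed by this page; substitute the repo id and whichever quantization file is actually listed.

```shell
# Sketch only: repo id and file name are assumptions -- adjust to match this repo.
pip install -U huggingface_hub

# Download one quantized file from the Hub
huggingface-cli download <repo-id> OpenMath-Mistral-7B-v0.1-hf.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp (llama-cli is the current name of its main binary)
llama-cli -m OpenMath-Mistral-7B-v0.1-hf.Q4_K_M.gguf \
  -p "Solve: 12 * 7 = ?" -n 128
```

Lower-bit quants (e.g. 2-bit) trade answer quality for smaller memory footprint; higher-bit quants (6- or 8-bit) stay closer to the original model.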

Format: GGUF
Model size: 7B params
Architecture: llama

Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
