Instructions for using QuantFactory/NuminaMath-7B-TIR-GGUF with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use QuantFactory/NuminaMath-7B-TIR-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/NuminaMath-7B-TIR-GGUF",
    filename="NuminaMath-7B-TIR.Q2_K.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
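The call returns an OpenAI-style response dictionary; a minimal follow-up sketch (assuming the non-streaming call above) for printing the assistant's reply:

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
# The reply text is nested under choices[0]["message"]["content"]
print(response["choices"][0]["message"]["content"])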
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/NuminaMath-7B-TIR-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M
Use Docker
docker model run hf.co/QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M
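Once llama-server is running, its OpenAI-compatible endpoint can also be queried from Python, for example with requests (a sketch assuming the server's default port 8080):

# pip install requests
import requests

# llama-server exposes an OpenAI-compatible chat completions endpoint
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    },
)
print(resp.json()["choices"][0]["message"]["content"])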
- LM Studio
- Jan
- vLLM
How to use QuantFactory/NuminaMath-7B-TIR-GGUF with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "QuantFactory/NuminaMath-7B-TIR-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantFactory/NuminaMath-7B-TIR-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
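The same server can also be called from Python with the openai client (a sketch, assuming the vLLM server above is running on localhost:8000):

# pip install openai
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; the api_key value is a placeholder
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="QuantFactory/NuminaMath-7B-TIR-GGUF",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)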
- Ollama
How to use QuantFactory/NuminaMath-7B-TIR-GGUF with Ollama:
ollama run hf.co/QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M
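Once the model has been pulled, Ollama's local API can also be called from Python (a sketch assuming Ollama's default port 11434 and the model tag used above):

# pip install requests
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])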
- Unsloth Studio
How to use QuantFactory/NuminaMath-7B-TIR-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/NuminaMath-7B-TIR-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/NuminaMath-7B-TIR-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/NuminaMath-7B-TIR-GGUF to start chatting
- Docker Model Runner
How to use QuantFactory/NuminaMath-7B-TIR-GGUF with Docker Model Runner:
docker model run hf.co/QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M
- Lemonade
How to use QuantFactory/NuminaMath-7B-TIR-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/NuminaMath-7B-TIR-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.NuminaMath-7B-TIR-GGUF-Q4_K_M
List all available models
lemonade list
QuantFactory/NuminaMath-7B-TIR-GGUF
This is a quantized version of AI-MO/NuminaMath-7B-TIR, created using llama.cpp.
Model Card for NuminaMath 7B TIR
NuminaMath is a series of language models that are trained to solve math problems using tool-integrated reasoning (TIR). NuminaMath 7B TIR won the first progress prize of the AI Math Olympiad (AIMO), with a score of 29/50 on the public and private test sets.
This model is a fine-tuned version of deepseek-ai/deepseek-math-7b-base with two stages of supervised fine-tuning:
- Stage 1: fine-tune the base model on a large, diverse dataset of natural language math problems and solutions, where each solution is templated with Chain of Thought (CoT) to facilitate reasoning.
- Stage 2: fine-tune the model from Stage 1 on a synthetic dataset of tool-integrated reasoning, where each math problem is decomposed into a sequence of rationales, Python programs, and their outputs. Here we followed Microsoft’s ToRA paper and prompted GPT-4 to produce solutions in the ToRA format with code execution feedback. Fine-tuning on this data produces a reasoning agent that can solve mathematical problems via a mix of natural language reasoning and use of the Python REPL to compute intermediate results (a schematic example of the format follows this list).
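Schematically, a single tool-integrated reasoning step in this format interleaves natural-language rationale with an executable Python block and its captured output. The template below is illustrative only, not a verbatim training sample; placeholders are marked with angle brackets:

<natural-language rationale for the next step>
```python
<model-written Python that prints an intermediate result>
```
```output
<the printed result of executing the code>
```
<further rationale, repeated as needed, ending with the final answer in \boxed{...}>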
Model description
- Model type: A 7B parameter math LLM fine-tuned in two stages of supervised fine-tuning, first on a dataset with math problem-solution pairs and then on a synthetic dataset with examples of multi-step generations using tool-integrated reasoning.
- Language(s) (NLP): Primarily English
- License: Apache 2.0
- Finetuned from model: deepseek-ai/deepseek-math-7b-base
Model Sources
- Repository: Coming soon!
- Demo: https://huggingface.co/spaces/AI-MO/math-olympiad-solver
Intended uses & limitations
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
import re
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="AI-MO/NuminaMath-7B-TIR", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
    {"role": "user", "content": "For how many values of the constant $k$ will the polynomial $x^{2}+kx+36$ have two distinct integer roots?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
gen_config = {
    "max_new_tokens": 1024,
    "do_sample": False,
    "stop_strings": ["```output"],  # Generate until Python code block is complete
    "tokenizer": pipe.tokenizer,
}
outputs = pipe(prompt, **gen_config)
text = outputs[0]["generated_text"]
print(text)
# WARNING: This code will execute the Python code in the string. We show this for educational purposes only.
# Please refer to our full pipeline for a safer way to execute code.
python_code = re.findall(r"```python(.*?)```", text, re.DOTALL)[0]
exec(python_code)
The above executes a single step of Python code; for more complex problems, you will want to run the logic for several steps to obtain the final solution.
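A minimal sketch of such a multi-step loop, building on the snippet above, is shown below. This is illustrative only: the round cap, the stopping heuristic, and the stop-string trimming are our own choices, and the official pipeline should be preferred for safely executing generated code.

# Illustrative multi-step TIR loop (not the official pipeline).
# WARNING: exec() runs model-generated code; sandbox it in real use.
import contextlib
import io

text = prompt       # reuses prompt, pipe, gen_config, and re from above
executed = 0        # number of Python blocks executed so far
for _ in range(5):  # cap the number of reasoning/execution rounds
    text = pipe(text, **gen_config)[0]["generated_text"]
    blocks = re.findall(r"```python(.*?)```", text, re.DOTALL)
    if len(blocks) <= executed:
        break  # no new code block: the model produced a final answer
    # Execute the newest code block and capture what it prints
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(blocks[-1])
    executed = len(blocks)
    # Remove a trailing stop string, then feed the execution result back
    text = text.rstrip()
    if text.endswith("```output"):
        text = text[: -len("```output")]
    text += "```output\n" + buffer.getvalue().rstrip() + "\n```\n"
print(text)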
Bias, Risks, and Limitations
NuminaMath 7B TIR was created to solve problems in the narrow domain of competition-level mathematics. As a result, the model should not be used for general chat applications. With greedy decoding, we find the model is capable of solving problems at the level of AMC 12, but it often struggles to generate a valid solution on harder problems at the AIME and Math Olympiad level. The model also struggles to solve geometry problems, likely due to its limited capacity and lack of other modalities like vision.
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a rough TrainingArguments sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4.0
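For reference, here is a rough sketch of how these hyperparameters might be expressed with transformers.TrainingArguments. This is not the official training configuration; the output directory and mixed-precision flag are assumptions.

# Rough sketch only: maps the hyperparameters above onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="numinamath-7b-tir-sft",   # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,        # 8 GPUs -> total train batch size 32
    per_device_eval_batch_size=8,         # 8 GPUs -> total eval batch size 64
    seed=42,
    num_train_epochs=4.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                            # assumption; precision is not stated above
)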
Framework versions
- Transformers 4.40.1
- Pytorch 2.3.1
- Datasets 2.18.0
- Tokenizers 0.19.1
Model Citation
If you find NuminaMath 7B TIR useful in your work, please cite it with:
@misc{numina_math_7b,
    author = {Edward Beeching and Shengyi Costa Huang and Albert Jiang and Jia Li and Benjamin Lipkin and Zihan Qina and Kashif Rasul and Ziju Shen and Roman Soletskyi and Lewis Tunstall},
    title = {NuminaMath 7B TIR},
    year = {2024},
    publisher = {Numina & Hugging Face},
    journal = {Hugging Face repository},
    howpublished = {\url{https://huggingface.co/AI-MO/NuminaMath-7B-TIR}}
}
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit