How to use from llama.cpp

Install from WinGet (Windows)

```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf DavidOKB/MathThink-Qwen-3.5-4B:Q8_0

# Run inference directly in the terminal:
llama-cli -hf DavidOKB/MathThink-Qwen-3.5-4B:Q8_0
```

Use pre-built binary

```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf DavidOKB/MathThink-Qwen-3.5-4B:Q8_0

# Run inference directly in the terminal:
./llama-cli -hf DavidOKB/MathThink-Qwen-3.5-4B:Q8_0
```

Build from source code

```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf DavidOKB/MathThink-Qwen-3.5-4B:Q8_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf DavidOKB/MathThink-Qwen-3.5-4B:Q8_0
```

Use Docker

```shell
docker model run hf.co/DavidOKB/MathThink-Qwen-3.5-4B:Q8_0
```
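Once `llama-server` is running, it exposes an OpenAI-compatible chat endpoint (on port 8080 by default) that any OpenAI client can target. A minimal sketch with `curl` — the prompt and sampling values here are illustrative, not part of this model card:

```shell
# Query the local OpenAI-compatible server (assumes llama-server's default port 8080).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "If 3x + 5 = 20, what is x? Show your reasoning."}
    ],
    "temperature": 1.0,
    "top_p": 0.95
  }'
```

The same endpoint also works with the official OpenAI SDKs by pointing their base URL at `http://localhost:8080/v1`.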
Qwen3.5-4B Math Fine-Tuned (Nemotron-SFT-Math-v3)
This model is a fine-tuned version of Qwen3.5-4B, optimized specifically for complex mathematical reasoning and Chain-of-Thought (CoT) problem solving. It was fine-tuned on the Nemotron-SFT-Math-v3 dataset using Parameter-Efficient Fine-Tuning (PEFT/LoRA).
Model Details
- Base Model: Qwen/Qwen3.5-4B
- Fine-Tuning Dataset: nvidia/Nemotron-SFT-Math-v3
- Methodology: LoRA (Rank = 64, Alpha = 32 or Alpha = 16). The `lora_alpha` scaling is specifically tuned to prevent catastrophic forgetting, ensuring the model retains conversational abilities while significantly enhancing mathematical logic.
- Quantization: Safetensors format (F16) and GGUF format (Q8_0)
Recommended Generation Parameters
Because this model leverages extensive Chain-of-Thought reasoning to solve math problems, the following generation parameters are highly recommended for the best performance:
```json
{
  "temperature": 1.0,
  "top_p": 0.95,
  "repetition_penalty": 1.1
}
```

Note: a `repetition_penalty` of 1.1 is crucial to prevent the base model from occasionally falling into infinite generation loops on extremely long context windows.
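With llama.cpp these settings map onto the standard sampling flags (note that llama.cpp names the repetition penalty `--repeat-penalty`; check `--help` on your build to confirm flag names). The prompt below is only an illustration:

```shell
# Run the model with the recommended sampling parameters.
./llama-cli -hf DavidOKB/MathThink-Qwen-3.5-4B:Q8_0 \
  --temp 1.0 \
  --top-p 0.95 \
  --repeat-penalty 1.1 \
  -p "Prove that the sum of two even integers is even."
```

The same values can be passed per-request to `llama-server` via the JSON body shown above.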
Use Cases
- Solving complex math word problems (GSM8K).
- Higher-level mathematical reasoning (MATH, AIME).
- Step-by-step logic tracking and proofs.
Install from brew

```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf DavidOKB/MathThink-Qwen-3.5-4B:Q8_0

# Run inference directly in the terminal:
llama-cli -hf DavidOKB/MathThink-Qwen-3.5-4B:Q8_0
```