# QED-Nano GGUF Models

## Model Generation Details

This model was generated using llama.cpp at commit 05fa625ea.

See the guide on choosing the right GGUF model format for help selecting a quantization.
# QED-Nano
## Model Summary
QED-Nano is a 4B-parameter model explicitly post-trained to strengthen its proof-writing capabilities. Despite its small size, QED-Nano achieves an impressive 40% score on the challenging IMO-ProofBench benchmark (+20 points over the Qwen3 base model), matching the performance of OpenAI's GPT-OSS-120B. With an agent scaffold that scales inference-time compute to over 1M tokens per problem, QED-Nano approaches the performance of Gemini-3-Pro. Crucially, the same agentic scaffold applied to the base model (Qwen3-4B-Thinking-2507) barely improves performance.
QED-Nano is based on Qwen/Qwen3-4B-Thinking-2507 and was post-trained with a combination of supervised fine-tuning and reinforcement learning with a reasoning cache (enabling training for continual improvement with our agentic scaffold at test time) on a mixture of Olympiad proof problems from various public sources.
For more details refer to our blog post: https://huggingface.co/spaces/lm-provers/qed-nano-blogpost
## How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "lm-provers/QED-Nano"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# prepare the model input
prompt = r"Generate a rigorous proof to the following question: is \sqrt{2} rational or irrational?"
messages_think = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages_think,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# generate the output with the recommended sampling parameters
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)

# decode only the newly generated tokens
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```
We recommend setting `temperature=0.6` and `top_p=0.95` in the sampling parameters.
## vLLM and SGLang

You can deploy the model with vLLM or SGLang behind an OpenAI-compatible API.
### SGLang

```shell
python -m sglang.launch_server --model-path lm-provers/QED-Nano
```
### vLLM

```shell
vllm serve lm-provers/QED-Nano
```
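Once a server is running, either backend exposes an OpenAI-compatible endpoint. A minimal sketch of a request, assuming vLLM's default port 8000 (SGLang defaults to port 30000):

```shell
# Query the OpenAI-compatible chat endpoint; assumes a vLLM server on its default port 8000.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lm-provers/QED-Nano",
    "messages": [{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    "temperature": 0.6,
    "top_p": 0.95
  }'
```

Any OpenAI-format client (e.g. the `openai` Python package pointed at this base URL) works the same way.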
## Evaluation
In this section, we report the evaluation results of QED-Nano on IMO-ProofBench, ProofBench, and IMO-AnswerBench. All evaluations except those on IMO-AnswerBench are reported as avg@3 unless stated otherwise.
| Model | IMO-ProofBench | ProofBench | IMO-AnswerBench |
|---|---|---|---|
| Qwen3-4B-Thinking-2507 | 20.4 (2.6) | 19.5 (0.9) | 55.8 |
| QED-Nano-SFT | 39.5 (2.9) | 33.3 (0.5) | 57.5 |
| QED-Nano | 40.0 (0.6) | 44.9 (3.4) | 67.5 |
| QED-Nano (Agent) | 54.0 (3.7) | 54.4 (2.4) | - |
| Qwen3-30B-A3B-Thinking-2507 | 27.6 (1.0) | 26.1 (2.4) | 67.0 |
| Qwen3-235B-A22B-Thinking-2507 | 34.1 (0.7) | 33.7 (1.1) | 70.5 |
| Nomos-1 | 40.3 (3.5) | 28.3 (3.9) | 49.0 |
| GPT-OSS-20B | 38.3 (1.2) | 38.4 (3.9) | 61.5 |
| GPT-OSS-120B | 43.1 (3.2) | 47.5 (1.7) | 70.5 |
| DeepSeek-Math-V2 | 57.9 (2.0) | 60.6 (0.1) | 75.8 |
| Gemini 3 Pro | 58.7 (2.9) | 66.7 (3.1) | 83.2 |
## Training

### Model
- Architecture: Transformer decoder
- Precision: bfloat16
- Base Model: lm-provers/QED-Nano-SFT
- Parameters: 4 billion
- Training Data: lm-provers/FineProofs-RL (5,227 problems)
### Training Hyperparameters
- Optimization Steps: 150
- Number of prompts per batch: 64
- Number of rollouts per prompt: 16
- Global batch size: 1024
- Max Rollout Length: 49,152 tokens
- Learning Rate Schedule: Constant with a learning rate of 1e-6
- Sampling temperature: 0.8
- LLM grader: GPT-OSS-20B with medium reasoning effort and sampling at temperature 1.0
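As a sanity check, the global batch size above is simply the number of prompts per batch times the number of rollouts per prompt:

```python
# Global batch size implied by the hyperparameters listed above.
prompts_per_batch = 64
rollouts_per_prompt = 16
global_batch_size = prompts_per_batch * rollouts_per_prompt
print(global_batch_size)  # 1024
```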
### Software & hardware
- GPU topology (each node has 8xH100s): 7 generator nodes, 4 trainer nodes, 1 grader node.
- Training time: 4 days or 9,216 H100 hours
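The reported H100-hour figure is consistent with the topology above: 12 nodes of 8 GPUs each, running for 4 days of wall-clock time:

```python
# Compute budget implied by the topology above: (7 + 4 + 1) nodes x 8 H100s x 4 days.
nodes = 7 + 4 + 1        # generator + trainer + grader nodes
gpus_per_node = 8
hours = 4 * 24           # 4 days of wall-clock time
h100_hours = nodes * gpus_per_node * hours
print(h100_hours)  # 9216
```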
- Training and evaluation framework: CMU-AIRe/QED-Nano
## Limitations
QED-Nano is a domain-specific model designed for one thing and one thing only: proving theorems. Using it as a general assistant will likely produce nonsense outside of this domain. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## License

## Acknowledgements
QED-Nano is a joint collaboration between the research teams at CMU, ETH Zurich, Numina, and Hugging Face. Below is a list of the individual contributors and their affiliations:
- **CMU**: Amrith Setlur, Yuxiao Qu, Ian Wu, and Aviral Kumar
- **ETH Zurich**: Jasper Dekoninck
- **Numina**: Jia Li
- **Hugging Face**: Edward Beeching and Lewis Tunstall
## 🚀 If you find these models useful
Help me test my AI-powered Quantum Network Monitor Assistant with quantum-ready security checks.

The full open-source code for the Quantum Network Monitor Service is available in my GitHub repos (repos with NetworkMonitor in the name): Source Code Quantum Network Monitor. You will also find the code I use to quantize the models in GGUFModelBuilder, if you want to do it yourself.
### 💬 How to test

Choose an AI assistant type:

- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (experimental CPU-only)
### What I'm Testing
I’m pushing the limits of small open-source models for AI network monitoring, specifically:
- Function calling against live network services
- How small can a model go while still handling:
- Automated Nmap security scans
- Quantum-readiness checks
- Network Monitoring tasks
🟡 **TestLLM** – current experimental model (llama.cpp on 2 CPU threads in a Hugging Face Docker space):
- ✅ Zero-configuration setup
- ⏳ ~30s load time (slow inference but no API costs). No token limit, since the cost is low.
- 🔧 Help wanted! If you’re into edge-device AI, let’s collaborate!
### Other Assistants
🟢 **TurboLLM** – uses GPT-4.1-mini:
- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
- Create custom cmd processors to run .net code on Quantum Network Monitor Agents
- Real-time network diagnostics and monitoring
- Security Audits
- Penetration testing (Nmap/Metasploit)
🔵 **HugLLM** – latest open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 Example commands you could test

- "Give me info on my website's SSL certificate"
- "Check if my server is using quantum-safe encryption for communication"
- "Run a comprehensive security audit on my server"
- "Create a cmd processor to .. (whatever you want)". Note: you need to install a Quantum Network Monitor Agent to run the .net code on. This is a very flexible and powerful feature. Use with caution!
## Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is open source. Feel free to use whatever you find helpful.
If you appreciate the work, please consider buying me a coffee ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊