## Model Summary
QED-Nano is a 4B-parameter model post-trained specifically to strengthen its proof-writing capabilities. Despite its small size, QED-Nano scores 40% on the challenging IMO-ProofBench benchmark (20 points above the Qwen3 base model), matching the performance of [GPT-OSS-120B](https://huggingface.co/openai/gpt-oss-120b) from OpenAI. With an agent scaffold that scales inference-time compute to over 1M tokens per problem, QED-Nano approaches the performance of Gemini-3-Pro. Crucially, the same agentic scaffold applied to the base model (Qwen3-4B-Thinking-2507) barely improves its performance.