This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the [OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset.

The [OpenThinker2-32B](https://huggingface.co/open-thoughts/OpenThinker2-32B) model is the highest performing open-data model. This model improves upon our previous [OpenThinker-32B](https://huggingface.co/open-thoughts/OpenThinker-32B) model, which was trained on 114k examples from [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k).

The numbers reported in the table below are evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).

| Model | Open Data? | Avg | AIME24 | AIME25 | AMC23 | MATH500 | GPQA-D | LCBv2 |