---
base_model: Qwen/Qwen3-1.7B-Base
datasets: open-r1/Mixture-of-Thoughts
library_name: transformers
model_name: OpenR1-Distill-Qwen3-1.7B-Math
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---

# Model Card for OpenR1-Distill-Qwen3-1.7B-Math

This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base) on the [open-r1/Mixture-of-Thoughts](https://huggingface.co/datasets/open-r1/Mixture-of-Thoughts) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="teetone/OpenR1-Distill-Qwen3-1.7B-Math", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
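
If you prefer explicit control over the chat template, generation budget, and dtype, the sketch below loads the model directly instead of going through `pipeline`. It is an illustration rather than part of the original card: the math question and the `max_new_tokens` value are assumptions, chosen because reasoning-distilled models tend to emit long chains of thought that a 128-token budget would truncate.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teetone/OpenR1-Distill-Qwen3-1.7B-Math"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# `dtype` is the Transformers >= 4.56 spelling of the former `torch_dtype`.
model = AutoModelForCausalLM.from_pretrained(model_id, dtype=torch.bfloat16, device_map="auto")

# Example prompt (an assumption, not from the card).
messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# A generous budget so the reasoning trace is not cut off mid-thought.
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```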

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/stanford-mercury/reasoning/runs/eexdjieo)

This model was trained with supervised fine-tuning (SFT).
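
The linked Weights & Biases run holds the actual configuration. For orientation, here is a minimal sketch of what an SFT run of this shape looks like with TRL's `SFTTrainer`; the `math` dataset subset name and every hyperparameter below are assumptions for illustration, not the recorded settings of this run.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumed subset: the card's name suggests the "math" split of Mixture-of-Thoughts.
dataset = load_dataset("open-r1/Mixture-of-Thoughts", "math", split="train")

training_args = SFTConfig(
    output_dir="OpenR1-Distill-Qwen3-1.7B-Math",
    max_length=4096,                 # truncation length; illustrative
    learning_rate=2e-5,              # illustrative, not the run's value
    num_train_epochs=1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    bf16=True,
    logging_steps=10,
    report_to="wandb",
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-1.7B-Base",    # the base model named in this card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```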

### Framework versions

- TRL: 0.18.0
- Transformers: 4.57.5
- PyTorch: 2.6.0
- Datasets: 4.4.2
- Tokenizers: 0.22.2
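
To reproduce results, match the pinned versions above. The snippet below is an optional convenience (not part of the original card) that prints what is installed locally so it can be compared against that list.

```python
# Sanity check: print locally installed versions of the pinned libraries.
import datasets, tokenizers, torch, transformers, trl

for name, module in [
    ("TRL", trl),
    ("Transformers", transformers),
    ("PyTorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```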

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```