---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- reasoning
- looped transformer
arxiv: 2511.08577
library_name: transformers
datasets:
- open-r1/Mixture-of-Thoughts
base_model:
- Qwen/Qwen3-1.7B-Base
---
This is the general version of Standard-1.7B, trained on a mixture of math, code, and science data. It is presented in the paper [Think-at-Hard: Selective Latent Iterations to Improve Reasoning Language Models](https://arxiv.org/abs/2511.08577).

Please visit our GitHub repository for more information.
## Sample Usage
Please see the GitHub example for full sample usage.
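As a minimal sketch, the model can be loaded through the standard `transformers` text-generation API. The repo id `your-org/Standard-1.7B` below is a placeholder, not the real Hub id; substitute this model's actual repository name.

```python
# Minimal usage sketch with Hugging Face transformers.
# NOTE: "your-org/Standard-1.7B" is a placeholder repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate(prompt: str, model_id: str = "your-org/Standard-1.7B") -> str:
    """Load the model from the Hub and generate a completion for `prompt`."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    print(generate("What is 12 * 7? Think step by step."))
```

For reasoning-oriented prompts, a larger `max_new_tokens` budget may be needed to fit the full chain of thought.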