---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- simplescaling/s1K-1.1
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
---
# Model Summary

s1.1 is our successor to s1, with better reasoning performance achieved by leveraging reasoning traces from r1 instead of Gemini.
- Logs: https://wandb.ai/tikatoka-snu/s1/runs/x4q29quz
- Repository: simplescaling/s1
- Paper: https://arxiv.org/abs/2501.19393
This model is a successor to s1-32B with slightly better performance. Thanks to Bespoke Labs (Ryan Marten) for helping generate the r1 traces for s1K with Curator.
# Use
The model usage is documented here.
Note that s1-32B and s1.1-32B use budget forcing in this table; specifically, the end-of-thinking token is ignored and "Wait" is appended up to four times.
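A minimal sketch of the budget-forcing loop described above: whenever the model emits its end-of-thinking marker, the marker is stripped and "Wait" is appended to nudge it into further reasoning, up to four times. The `END_OF_THINKING` string and `toy_generate` stand-in are hypothetical here; a real setup would use the model's actual delimiter token and decoding loop.

```python
END_OF_THINKING = "</think>"  # assumed marker; the real token depends on the chat template

def toy_generate(prompt: str) -> str:
    """Hypothetical stand-in for a model's decode step: it always
    'finishes' thinking immediately by emitting the end marker."""
    return prompt + " some reasoning " + END_OF_THINKING

def budget_force(prompt: str, max_waits: int = 4) -> str:
    """Suppress end-of-thinking and append 'Wait' up to max_waits times."""
    text = toy_generate(prompt)
    waits = 0
    while text.endswith(END_OF_THINKING) and waits < max_waits:
        # Ignore the end-of-thinking marker and force continued reasoning.
        text = text[: -len(END_OF_THINKING)] + "Wait"
        text = toy_generate(text)
        waits += 1
    return text

out = budget_force("Question: 2+2?")
print(out.count("Wait"))  # → 4: the model was nudged to keep thinking four times
```

With a real model, the same pattern is applied at the token level during decoding rather than on strings.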
The model is trained with `block_size` 20000.