Use from the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="simplescaling/token-conditional-control")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("simplescaling/token-conditional-control")
model = AutoModelForCausalLM.from_pretrained("simplescaling/token-conditional-control")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Slice off the prompt tokens and decode only the newly generated ones
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
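In the direct-loading example, `generate` returns the prompt ids followed by the newly generated ids, so the final slice `outputs[0][inputs["input_ids"].shape[-1]:]` keeps only the new tokens before decoding. A minimal illustration of that slicing with plain Python lists (the ids are made up):

```python
# generate() echoes the prompt ids before the new tokens, so slice them off.
prompt_ids = [101, 102, 103]              # ids produced by apply_chat_template
generated = [101, 102, 103, 7, 8, 9]      # what model.generate would return
new_tokens = generated[len(prompt_ids):]  # same idea as outputs[0][input_len:]
print(new_tokens)  # [7, 8, 9]
```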
Model Summary

This is the token-conditional control model for our paper. You can evaluate it using the information here.
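Token-conditional control, as studied in the s1 paper, conditions generation on a desired thinking-token budget supplied in the prompt. The exact control format this checkpoint was trained with is documented in the paper and repository, not here; the sketch below is a purely hypothetical illustration of budget-conditioned prompting, and the phrasing `Think for up to N tokens` is an assumption, not the model's actual interface.

```python
# Hypothetical sketch only: attaching a thinking-token budget to a chat prompt.
# The real control format is defined in the s1 paper/repo.
budget = 2048  # illustrative number of thinking tokens to allow
question = "How many positive divisors does 360 have?"
messages = [
    {"role": "user", "content": f"{question}\nThink for up to {budget} tokens."},
]
print(messages[0]["content"].splitlines()[-1])  # Think for up to 2048 tokens.
```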

Training information

Visualize in Weights & Biases

  • TRL: 0.13.0
  • Transformers: 4.48.0
  • PyTorch: 2.3.1
  • Datasets: 3.0.1
  • Tokenizers: 0.21.0
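To match the training environment, the versions listed above can be pinned with pip (package names are assumed from the list; `torch` is the pip package for PyTorch):

```shell
pip install trl==0.13.0 transformers==4.48.0 torch==2.3.1 datasets==3.0.1 tokenizers==0.21.0
```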

Citation

```bibtex
@misc{muennighoff2025s1simpletesttimescaling,
      title={s1: Simple test-time scaling},
      author={Niklas Muennighoff and Zitong Yang and Weijia Shi and Xiang Lisa Li and Li Fei-Fei and Hannaneh Hajishirzi and Luke Zettlemoyer and Percy Liang and Emmanuel Candès and Tatsunori Hashimoto},
      year={2025},
      eprint={2501.19393},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.19393},
}
```
Model details

  • Base model: Qwen/Qwen2.5-32B (this model is a finetune)
  • Model size: 33B params (Safetensors, F32)