Multiverse: Your Language Models Secretly Decide How to Parallelize and Merge Generation
Paper: [2506.09991](https://huggingface.co/papers/2506.09991)
Autoregressive-32B is a baseline for our Multiverse-32B, built on standard autoregressive modeling. Model usage is documented in the snippets below.

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Multiverse4FM/Autogressive-32B")
model = AutoModelForCausalLM.from_pretrained("Multiverse4FM/Autogressive-32B")

# Build a prompt with the model's chat template
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly generated tokens
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
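The snippet above loads the 32B checkpoint in its default precision on a single device. A minimal sketch of a more memory-friendly load, assuming bf16-capable GPUs and the `accelerate` package installed (the kwarg values are a common choice, not the authors' recommendation):

```python
import torch
from transformers import AutoModelForCausalLM

# Load in bfloat16 and let accelerate shard the weights across devices.
# A 32B model is roughly 64 GB of weights in bf16; adjust to your hardware.
model = AutoModelForCausalLM.from_pretrained(
    "Multiverse4FM/Autogressive-32B",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)
```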
Benchmark accuracy (%):

| Model | AIME24 | AIME25 | MATH500 | GPQA-Diamond |
|---|---|---|---|---|
| s1-32B | 35.4 | 25.8 | 88.6 | 48.0 |
| s1.1-32B | 52.9 | 41.7 | 93.4 | 62.6 |
| Qwen2.5-32B-Instruct | 15.8 | 10.4 | 80.4 | 47.0 |
| Autoregressive-32B | 54.6 | 45.0 | 92.8 | 61.6 |
| Multiverse-32B-zero | 52.1 | 44.2 | 92.4 | 63.6 |
| Multiverse-32B | 53.8 | 45.8 | 91.8 | 60.7 |
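For context on what these numbers measure, a minimal sketch of exact-match scoring as commonly done for math benchmarks; the `\boxed{}` extraction rule and scoring below are illustrative assumptions, not the paper's evaluation harness:

```python
# Illustrative exact-match accuracy for a math benchmark.
# The \boxed{...} extraction and scoring rule are assumptions here,
# not the paper's actual evaluation setup.
import re

def extract_answer(text: str) -> str | None:
    """Pull the final answer out of a solution ending in \\boxed{...}."""
    match = re.search(r"\\boxed\{([^}]*)\}", text)
    return match.group(1).strip() if match else None

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Percentage of predictions whose extracted answer matches the reference."""
    correct = sum(
        extract_answer(pred) == ref.strip()
        for pred, ref in zip(predictions, references)
    )
    return 100.0 * correct / len(references)

print(accuracy([r"... so the answer is \boxed{42}."], ["42"]))  # 100.0
```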
Thanks to the amazing s1 team for the s1.1 dataset used as our base data, and to the Qwen team for Qwen2.5-32B-Instruct used as our base model.
Alternatively, use a pipeline as a high-level helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Multiverse4FM/Autogressive-32B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
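Generation kwargs such as `max_new_tokens` pass through the pipeline call to `generate`, so the token budget and sampling can be controlled per call; a short usage sketch (parameter values are illustrative):

```python
# Sampled generation with a larger token budget (values are illustrative).
out = pipe(messages, max_new_tokens=512, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```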