# Composition-RL-8B

This repository contains the Composition-RL-8B model, developed as part of the research presented in the paper *Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models*.

## Usage

Load the model directly:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("xx18/Composition-RL-4B-Physics_Math")
model = AutoModelForCausalLM.from_pretrained("xx18/Composition-RL-4B-Physics_Math")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
## Model Description
Composition-RL is a data-efficient Reinforcement Learning with Verifiable Rewards (RLVR) approach designed to improve the reasoning capabilities of Large Language Models. It addresses the issue of "too-easy" prompts (pass-rate = 1) by automatically composing multiple verifiable problems into a single, harder verifiable prompt. This ensures the model continues to receive informative training signals throughout the RL process.
- Initial Model: Qwen3-8B-Base
- Training Dataset: MATH-Composition-199K
- Task: Mathematical Reasoning
- Paper: arXiv:2602.12036
- Code: GitHub - Composition-RL
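
To make the composition idea concrete, the sketch below shows one plausible way to merge several verifiable (question, answer) pairs into a single composed prompt with an all-or-nothing verifiable reward. The function names, prompt format, and string-match verifier are illustrative assumptions for this card, not the exact pipeline used in the paper.

```python
# Illustrative sketch only: a simplified view of composing verifiable prompts.
# The paper's actual data pipeline and answer verifier may differ.

def compose_prompts(problems):
    """Merge several (question, answer) pairs into one harder, still-verifiable prompt."""
    parts = [f"Problem {i + 1}: {p['question']}" for i, p in enumerate(problems)]
    question = (
        "Solve all of the following problems and state every final answer.\n\n"
        + "\n\n".join(parts)
    )
    # The composed prompt stays verifiable: the reference is the list of sub-answers.
    return {"question": question, "answers": [p["answer"] for p in problems]}

def composed_reward(model_output, answers):
    """All-or-nothing verifiable reward: 1.0 only if every sub-answer is present."""
    return float(all(ans in model_output for ans in answers))

# Two prompts the policy already solves on every rollout (pass rate = 1) ...
easy = [
    {"question": "What is 2 + 3?", "answer": "5"},
    {"question": "What is the derivative of x^2?", "answer": "2x"},
]
# ... become one composed prompt that still yields an informative training signal.
composed = compose_prompts(easy)
print(composed["question"])
print(composed_reward("The answers are 5 and 2x.", composed["answers"]))  # 1.0
```

In an RLVR loop, composed prompts of this kind stand in for saturated single problems, so the policy keeps receiving a non-trivial reward signal as training progresses.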
## Performance

As detailed in the paper, Composition-RL-8B consistently improves reasoning capability over RL trained on the original, non-compositional datasets across various benchmarks.
## Citation
If you find this work helpful, please consider citing:
```bibtex
@article{xu2026composition-rl,
  title={Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models},
  author={Xu, Xin and Bai, Clive and Yang, Kai and Chen, Tianhao and Chen, Yangkun and Liu, Weijie and Chen, Hao and Wang, Yang and Yang, Saiyong and Yang, Can},
  journal={arXiv preprint arXiv:2602.12036},
  year={2026}
}
```
Alternatively, use a pipeline as a high-level helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="xx18/Composition-RL-4B-Physics_Math")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```