Composition-RL-8B

Paper | Code

Composition-RL-8B is a large language model fine-tuned for enhanced reasoning using the Composition-RL framework. It was initialized from the Qwen3-8B-Base architecture and trained on the MATH-Composition-199K dataset.

Description

Composition-RL is a data-efficient Reinforcement Learning with Verifiable Rewards (RLVR) approach presented in the paper Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models.

The method addresses the issue of "too-easy" prompts (prompts whose pass rate reaches 1 as training progresses), which weaken the effective training signal. Composition-RL automatically composes multiple verifiable problems into a single, harder compositional prompt, ensuring the model continues to receive informative rewards throughout the reinforcement learning process.
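As a rough illustration of the idea, the sketch below composes several verifiable problems into one prompt and grants a verifiable reward only when every sub-answer is correct. This is a minimal, hypothetical sketch: the function names (`compose_prompts`, `composite_reward`) and the all-or-nothing reward rule are illustrative assumptions, not the paper's actual implementation.

```python
def compose_prompts(problems):
    """Chain several verifiable problems into one compositional prompt.
    A rollout must solve every sub-problem, so the composed prompt is
    harder than any individual "too-easy" one.
    (Illustrative sketch, not the paper's prompt template.)"""
    parts = [f"Problem {i + 1}: {p}" for i, p in enumerate(problems)]
    return "\n".join(parts) + "\nAnswer every problem in order."


def composite_reward(answers, references):
    """All-or-nothing verifiable reward: 1.0 only if every sub-answer
    matches its reference answer. (Assumed reward rule for illustration.)"""
    return 1.0 if len(answers) == len(references) and all(
        a == r for a, r in zip(answers, references)
    ) else 0.0


# Two prompts whose individual pass rates have saturated:
saturated = ["Compute 2 + 3.", "What is 7 * 6?"]
prompt = compose_prompts(saturated)

# The composed prompt only rewards rollouts that solve both parts:
print(composite_reward(["5", "42"], ["5", "42"]))  # 1.0
print(composite_reward(["5", "41"], ["5", "42"]))  # 0.0
```

Because a composed prompt is solved only when all of its parts are, its pass rate stays below 1 even after the individual problems have saturated, which keeps the reward informative.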

  • Base Model: Qwen3-8B-Base
  • Training Method: Reinforcement Learning with Verifiable Rewards (RLVR)
  • Training Dataset: MATH-Composition-199K

Citation

If you find this work helpful for your research, please consider citing:

@article{xu2026composition-rl,
  title={Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models},
  author={Xu, Xin and Bai, Clive and Yang, Kai and Chen, Tianhao and Chen, Yangkun and Liu, Weijie and Chen, Hao and Wang, Yang and Yang, Saiyong and Yang, Can},
  journal={arXiv preprint arXiv:2602.12036},
  year={2026}
}