Add model card and metadata
#1
by nielsr - opened
README.md
ADDED
@@ -0,0 +1,31 @@
---
library_name: transformers
pipeline_tag: text-generation
---

# Composition-RL-8B

This repository contains the **Composition-RL-8B** model, developed as part of the research presented in the paper [Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models](https://huggingface.co/papers/2602.12036).

## Model Description

Composition-RL is a data-efficient Reinforcement Learning with Verifiable Rewards (RLVR) approach designed to improve the reasoning capabilities of Large Language Models. It addresses the issue of "too-easy" prompts (pass rate of 1) by automatically composing multiple verifiable problems into a single, harder verifiable prompt, ensuring the model continues to receive informative training signals throughout the RL process.
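
To make the composition idea concrete, here is a minimal, hypothetical sketch of how two verifiable prompts might be merged into one harder prompt that is still automatically checkable. The `compose` helper and the two-answer template are illustrative assumptions, not the pipeline from the paper:

```python
# Illustrative only: a naive way to compose two verifiable math problems
# into a single prompt whose answer can still be checked automatically.
# The template and helper below are assumptions, not the paper's code.

def compose(problem_a: str, answer_a: str, problem_b: str, answer_b: str):
    """Merge two verifiable problems into one harder verifiable prompt."""
    prompt = (
        "Solve both problems and report the two final answers in order.\n"
        f"Problem 1: {problem_a}\n"
        f"Problem 2: {problem_b}\n"
        "Answers: \\boxed{...}, \\boxed{...}"
    )
    # The composed prompt stays verifiable: a reward of 1 requires
    # both boxed answers to match the references.
    reference = (answer_a, answer_b)
    return prompt, reference

prompt, reference = compose(
    "What is 3 + 4?", "7",
    "How many primes are less than 10?", "4",
)
print(prompt)
print(reference)  # ('7', '4')
```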

- **Initial Model:** Qwen3-8B-Base
- **Training Dataset:** [MATH-Composition-199K](https://huggingface.co/datasets/xx18/MATH-Composition-199K)
- **Task:** Mathematical Reasoning
- **Paper:** [arXiv:2602.12036](https://arxiv.org/abs/2602.12036)
- **Code:** [GitHub - Composition-RL](https://github.com/XinXU-USTC/Composition-RL)
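
## Usage

The model should work with the standard `transformers` text-generation pipeline. The sketch below assumes a repository id of `xx18/Composition-RL-8B` (inferred from the dataset namespace above); substitute the actual model id if it differs:

```python
# Minimal sketch using the standard transformers text-generation pipeline.
# NOTE: the repo id is an assumption inferred from the dataset namespace
# (xx18/...); replace it with the actual model id if different.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="xx18/Composition-RL-8B",  # hypothetical repo id
    torch_dtype="auto",
    device_map="auto",
)

prompt = "Solve step by step: what is the sum of the first 100 positive integers?"
output = generator(prompt, max_new_tokens=512)
print(output[0]["generated_text"])
```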

## Performance

As detailed in the paper, Composition-RL-8B consistently improves reasoning capability across a range of benchmarks compared with RL trained on the original, non-compositional datasets.

## Citation

If you find this work helpful, please consider citing:

```bibtex
@article{xu2026composition-rl,
  title={Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models},
  author={Xu, Xin and Bai, Clive and Yang, Kai and Chen, Tianhao and Chen, Yangkun and Liu, Weijie and Chen, Hao and Wang, Yang and Yang, Saiyong and Yang, Can},
  journal={arXiv preprint arXiv:2602.12036},
  year={2026}
}
```