---
license: apache-2.0
---


# MMRL30k: A Diverse Training Dataset for Reinforcement Learning Used by Shuffle-R1


The training data contains 2.1k samples from Geometry3K and 27k randomly selected samples from the MM-EUREKA dataset. Each sample follows the format below:
```
{
    "problem": "your problem",  # type: str
    "images": [{"bytes": image_bytes, "path": None}],  # type: list[dict]
    "answer": "your answer",  # type: str
    "source": "data source"  # type: str, not used in training
}
```
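As a minimal sketch, the schema above can be checked in plain Python; the problem text, answer, and image bytes below are placeholders, not real dataset contents:

```python
# Hypothetical sample matching the documented format. The bytes value is a
# placeholder; in the actual dataset it holds an encoded image (e.g. PNG/JPEG).
image_bytes = b"\x89PNG\r\n\x1a\n"

sample = {
    "problem": "What is the measure of angle ABC?",  # str
    "images": [{"bytes": image_bytes, "path": None}],  # list[dict]
    "answer": "60",  # str
    "source": "Geometry3K",  # str, not used in training
}

# Basic schema checks mirroring the format description.
assert isinstance(sample["problem"], str)
assert isinstance(sample["images"], list)
assert all(set(img) == {"bytes", "path"} for img in sample["images"])
assert isinstance(sample["answer"], str)
assert isinstance(sample["source"], str)
```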

## Usage
The training data follows the format of [**EasyR1**](https://github.com/hiyouga/EasyR1). 

Refer to [**Shuffle-R1**](https://github.com/xiaomi-research/shuffle-r1) for training usage.


## Acknowledgement
The training data is collected from [**Geometry3K**](https://huggingface.co/datasets/hiyouga/geometry3k) and the [**MM-EUREKA dataset**](https://huggingface.co/datasets/FanqingM/MM-Eureka-Dataset).

## Citation

If you find our work useful for your research, please consider citing:
```
@misc{zhu2025shuffler1,
      title={Shuffle-R1: Efficient RL framework for Multimodal Large Language Models via Data-centric Dynamic Shuffle},
      author={Linghao Zhu and Yiran Guan and Dingkang Liang and Jianzhong Ju and Zhenbo Luo and Bin Qin and Jian Luan and Yuliang Liu and Xiang Bai},
      year={2025},
      eprint={2508.05612},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2508.05612},
}
```