---
license: apache-2.0
task_categories:
  - text-retrieval
  - text-classification
language:
  - en
tags:
  - multimodal
pretty_name: UME-rl-train
---

# UME-rl-train

## Introduction

This dataset is constructed from the training set of MMEB-V2 by sampling evenly across its image, video, and visual-document modalities, so that the total amount of data is approximately balanced both across modalities and across the individual datasets within each modality. This yields a final set of 11,136 RL pairs.
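The balanced-sampling scheme described above can be sketched as follows. This is a toy illustration under stated assumptions, not the authors' actual construction code; the function name `balanced_sample` and the pool structure are hypothetical:

```python
import random

def balanced_sample(pools, total, seed=0):
    """Sample so each modality contributes ~total/len(pools) examples,
    split evenly across the datasets within that modality.

    pools: {modality: {dataset_name: [examples]}}  -- hypothetical layout
    """
    rng = random.Random(seed)
    per_modality = total // len(pools)
    sampled = []
    for modality, datasets in pools.items():
        per_dataset = per_modality // len(datasets)
        for name, examples in datasets.items():
            # Cap at the dataset's actual size if it is too small.
            k = min(per_dataset, len(examples))
            sampled.extend(rng.sample(examples, k))
    return sampled
```

In practice the per-dataset quotas would need rounding adjustments to hit an exact target such as 11,136, but the sketch captures the two levels of balancing: across modalities, then across datasets within a modality.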

## Citation

If you find our work useful, please consider citing it.

```bibtex
@article{lan2025ume,
  title={UME-R1: Exploring Reasoning-Driven Generative Multimodal Embeddings},
  author={Lan, Zhibin and Niu, Liqiang and Meng, Fandong and Zhou, Jie and Su, Jinsong},
  journal={arXiv preprint arXiv:2511.00405},
  year={2025}
}
```