---
library_name: transformers
tags:
- multimodal
- reasoning
- sft
- rl
datasets:
- LightChen2333/M3CoT
- ModalityDance/Omni-Bench
base_model:
- GAIR/Anole-7b-v0.1
license: mit
pipeline_tag: any-to-any
---

# Omni-R1-Zero
Omni-R1-Zero is trained without any multimodal annotations. It bootstraps step-wise visualizations from text-only chain-of-thought (CoT) seeds, then follows the same SFT→RL recipe to learn interleaved multimodal reasoning.
[Paper 👁️](https://arxiv.org/abs/2601.09536) · Code 🐙 · Omni-Bench 🧪
## Citation
```bibtex
@misc{cheng2026omnir1unifiedgenerativeparadigm,
      title={Omni-R1: Towards the Unified Generative Paradigm for Multimodal Reasoning},
      author={Dongjie Cheng and Yongqi Li and Zhixin Ma and Hongru Cai and Yupeng Hu and Wenjie Wang and Liqiang Nie and Wenjie Li},
      year={2026},
      eprint={2601.09536},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2601.09536},
}
```