---
language:
- en
- zh
license: apache-2.0
tags:
- MoE
- Unified Generation
- Multi-modal
library_name: transformers
pipeline_tag: image-text-to-text
---
Uni-MoE 2.0-Thinking
Uni-MoE 2.0 is a fully open-source omnimodal model that substantially advances the capabilities of Lychee's Uni-MoE series in language-centric multimodal understanding, reasoning, and generation.
Uni-MoE 2.0-Thinking is a thinking model obtained by training Uni-MoE 2.0-Base through a three-stage reinforcement learning process, which equips it with long-form reasoning capabilities.
If you enjoy our work or want timely updates, please give us a like and follow us.
Open-source Plan
- Model Checkpoint
- Inference Code: HITsz-TMG/Uni-MoE-2.0
- Training Code: HITsz-TMG/Uni-MoE-2.0
- Technical Report: arXiv
Installation
1. Clone this repository and navigate to the Uni-MoE 2.0 folder
git clone https://github.com/HITsz-TMG/Uni-MoE.git
cd Uni-MoE-2
2. Set up the environment
Install the evaluation environment according to the requirements.
conda create -n uni_moe_2 python=3.11
conda activate uni_moe_2
pip install torch==2.5.1 torchaudio==2.5.1 torchvision==0.20.1
pip install -r requirements.txt
pip install flash-attn==2.6.0.post1 --no-build-isolation
pip install "clip @ git+https://github.com/openai/CLIP.git@dcba3cb2e2827b402d2701e7e1c7d9fed8a20ef1"
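After installation, the environment can be sanity-checked with a short Python snippet (illustrative only; it assumes a CUDA-capable GPU is available):
# Quick environment sanity check (illustrative; assumes a CUDA-capable GPU).
import torch
import flash_attn

print(torch.__version__)          # expected: 2.5.1
print(torch.cuda.is_available())  # should be True for GPU inference
print(flash_attn.__version__)     # expected: 2.6.0.post1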
Example Usage
We provide a simple example of how to use this repo. For detailed usage, please refer to the cookbook.
import torch
from uni_moe.model.processing_qwen2_vl import Qwen2VLProcessor
from uni_moe.model.modeling_qwen_grin_moe import GrinQwen2VLForConditionalGeneration
from uni_moe.qwen_vl_utils import process_mm_info
from uni_moe.model import deepspeed_moe_inference_utils
processor = Qwen2VLProcessor.from_pretrained("HIT-TMG/Uni-MoE-2.0-Thinking")
model = GrinQwen2VLForConditionalGeneration.from_pretrained("HIT-TMG/Uni-MoE-2.0-Thinking", torch_dtype=torch.bfloat16).cuda()
processor.data_args = model.config
messages = [
    {
        "role": "system",
        "content": "You are Uni-MoE-v2, a helpful multi-modal model. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: <think> Thought section </think> Solution section. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines."
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "examples/assets/image/thinking.jpg"},
            {"type": "text", "text": "<image>\nHint: Please answer the question requiring an integer answer and provide the final value, e.g., 1, 2, 3, at the end.\nQuestion: Several people compared how many Web pages they had visited. What is the mean of the numbers?"},
        ],
    },
]
texts = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Map the generic <image>/<audio>/<video> placeholders to the model's special tokens
texts = texts.replace("<image>","<|vision_start|><|image_pad|><|vision_end|>").replace("<audio>","<|audio_start|><|audio_pad|><|audio_end|>").replace("<video>","<|vision_start|><|video_pad|><|vision_end|>")
image_inputs, video_inputs, audio_inputs = process_mm_info(messages)
inputs = processor(
    text=texts,
    images=image_inputs,
    videos=video_inputs,
    audios=audio_inputs,
    padding=True,
    return_tensors="pt",
)
inputs["input_ids"] = inputs["input_ids"].unsqueeze(0)
inputs = inputs.to(device=model.device)
output_ids = model.generate(
    **inputs,
    use_cache=True,
    pad_token_id=processor.tokenizer.eos_token_id,
    max_new_tokens=8192,
    temperature=1.0,
    do_sample=True,
)
text = processor.batch_decode(output_ids[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)[0]
print(text)
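Since the system prompt instructs the model to wrap its reasoning in <think> ... </think> before the final solution, the answer can be separated from the trace with a simple split. This is an illustrative post-processing step, not part of the official pipeline:
# Split the reasoning trace from the final solution (illustrative post-processing).
if "</think>" in text:
    thought, solution = text.split("</think>", 1)
    thought = thought.replace("<think>", "").strip()
    solution = solution.strip()
else:
    thought, solution = "", text.strip()
print("Solution:", solution)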
Model Structure
Uni-MoE 2.0
We present Uni-MoE 2.0 from the Lychee family. As a fully open-source omnimodal large model (OLM), it substantially advances the capabilities of Lychee's Uni-MoE series in language-centric multimodal understanding, reasoning, and generation. Built on the Qwen2.5-7B dense architecture, Uni-MoE 2.0 is trained from scratch through three core contributions: a dynamic-capacity Mixture-of-Experts (MoE) design, a progressive training strategy enhanced with an iterative reinforcement strategy, and a carefully curated multimodal data-matching technique. Uni-MoE 2.0 is capable of cross- and tri-modality understanding, as well as generating images, text, and speech.

UniMoE-Audio
UniMoE-Audio introduces a dynamic-capacity routing mechanism based on Top-P sampling for adaptive expert allocation, together with a hybrid expert design that separates domain-specific computation (dynamic experts) from universal representations (shared experts). To address data imbalance and task conflicts, UniMoE-Audio adopts a structured three-stage training curriculum. From voice cloning and text-to-speech (TTS) to text-to-music (T2M) and video-to-music (V2M), UniMoE-Audio supports diverse creative workflows. Extensive experiments confirm its state-of-the-art performance and superior cross-task synergy, paving the way toward universal audio generation.
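For intuition, the following is a minimal, hypothetical sketch of dynamic-capacity routing via Top-P expert selection; the tensor shapes, threshold, and renormalization are illustrative assumptions, not the actual UniMoE-Audio implementation:
import torch

def top_p_expert_routing(router_logits: torch.Tensor, p: float = 0.7):
    # Per token, keep the smallest set of experts whose cumulative routing
    # probability reaches p (hypothetical sketch of Top-P expert selection).
    probs = torch.softmax(router_logits, dim=-1)                  # [tokens, num_experts]
    sorted_probs, sorted_idx = probs.sort(dim=-1, descending=True)
    cumulative = sorted_probs.cumsum(dim=-1)
    keep = (cumulative - sorted_probs) < p                        # always keeps the top expert
    mask = torch.zeros_like(probs).scatter(-1, sorted_idx, keep.float()).bool()
    weights = torch.where(mask, probs, torch.zeros_like(probs))
    weights = weights / weights.sum(dim=-1, keepdim=True)         # renormalize kept experts
    return mask, weights

# Example: 4 tokens routed over 8 dynamic experts; the number of active experts varies per token.
mask, weights = top_p_expert_routing(torch.randn(4, 8), p=0.7)
print(mask.sum(dim=-1))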

Uni-MoE 1.0
The model architecture of Uni-MoE is shown below. The three training stages are: 1) utilize pairs from different modalities and languages to build connectors that map these elements to a unified language space, establishing a foundation for multimodal understanding; 2) develop modality-specific experts using cross-modal data to ensure deep understanding, preparing for a cohesive multi-expert model; 3) incorporate multiple trained experts into LLMs and refine the unified multimodal model using the LoRA technique on mixed multimodal data, as sketched below.
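As a rough illustration of stage 3, LoRA adapters could be attached with the peft library along these lines (a hypothetical sketch; the actual target modules and hyperparameters used by Uni-MoE are not specified here):
# Hypothetical stage-3 sketch: wrap the unified multi-expert model with LoRA adapters via peft.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                                  # illustrative rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # `model` is the loaded multimodal LLM
model.print_trainable_parameters()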

Star History
Citation
If you find Uni-MoE useful for your research and applications, please cite using this BibTeX:
@article{li_unimoe2omni,
  author={Li, Yunxin and Chen, Xinyu and Jiang, Shenyuan and Shi, Haoyuan and Liu, Zhenyu and Zhang, Xuanyu and Deng, Nanhao and Xu, Zhenran and Ma, Yicheng and Zhang, Meishan and Hu, Baotian and Zhang, Min},
  journal={arXiv preprint arXiv:2511.12609},
  title={Uni-MoE-2.0-Omni: Scaling Language-Centric Omnimodal Large Model with Advanced MoE, Training and Data},
  year={2025},
}
@article{li_unimoe,
  author={Li, Yunxin and Jiang, Shenyuan and Hu, Baotian and Wang, Longyue and Zhong, Wanqi and Luo, Wenhan and Ma, Lin and Zhang, Min},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={Uni-MoE: Scaling Unified Multimodal LLMs With Mixture of Experts},
  year={2025},
  volume={47},
  number={5},
  pages={3424-3439},
  doi={10.1109/TPAMI.2025.3532688}
}
@article{liu2025unimoe,
  title={UniMoE-Audio: Unified Speech and Music Generation with Dynamic-Capacity MoE},
  author={Liu, Zhenyu and Li, Yunxin and Zhang, Xuanyu and Teng, Qixun and Jiang, Shenyuan and Chen, Xinyu and Shi, Haoyuan and Li, Jinchao and Wang, Qi and Chen, Haolan and others},
  journal={arXiv preprint arXiv:2510.13344},
  year={2025}
}