UniPercept: Towards Unified Perceptual-Level Image Understanding across Aesthetics, Quality, Structure, and Texture
If you find this project useful, please give it a Star ⭐️. It means a lot to us!
⭐️ More Research:
- ArtiMuse: Fine-Grained Image Aesthetics Assessment with Joint Scoring and Expert-Level Understanding
🚀 News & Updates
- [Dec 29, 2025] 🔥 Official Release
- Technical Report
- Project Page
- UniPercept-Bench: A comprehensive evaluation suite for perceptual-level MLLMs, spanning Image Aesthetics Assessment (IAA), Image Quality Assessment (IQA), and Image Structure & Texture Assessment (ISTA) across Visual Rating (VR) and Visual Question Answering (VQA) tasks.
- UniPercept: A powerful baseline MLLM specialized for perceptual image understanding, optimized via Domain-Adaptive Pre-Training and Task-Aligned RL.
🌟 Abstract
Multimodal large language models (MLLMs) have achieved remarkable progress in visual understanding tasks such as visual grounding, segmentation, and captioning. However, their ability to understand images at the perceptual level remains limited. In this work, we present UniPercept-Bench, a unified framework for perceptual-level image understanding across three key domains: Aesthetics, Quality, and Structure and Texture. We establish a hierarchical definition system and construct large-scale datasets to evaluate perceptual-level image understanding. Based on this foundation, we develop a strong baseline, UniPercept, trained via Domain-Adaptive Pre-Training and Task-Aligned RL, enabling robust generalization across both Visual Rating (VR) and Visual Question Answering (VQA) tasks. UniPercept outperforms existing MLLMs on perceptual-level image understanding and can serve as a plug-and-play reward model for text-to-image generation. This work defines Perceptual-Level Image Understanding in the era of MLLMs and, through the introduction of a comprehensive benchmark together with a strong baseline, provides a solid foundation for advancing perceptual-level multimodal image understanding.
📊 UniPercept-Bench
We introduce UniPercept-Bench, a systematic benchmark for perceptual image understanding:
- Comprehensive Coverage: Spans 3 domains (IAA, IQA, ISTA), 17 categories, and 43 criteria.
- Perceptual Tasks: Supports both Visual Rating (VR) and Visual Question Answering (VQA).
🔍 UniPercept
UniPercept is a strong baseline MLLM trained via Domain-Adaptive Pre-Training and Task-Aligned RL to handle both Visual Rating (VR) (continuous scoring) and Visual Question Answering (VQA) (reasoning).
🛠️ Setup
```bash
conda create -n unipercept python=3.10
conda activate unipercept
cd UniPercept
pip install -r requirements.txt
```
📉 Evaluation
Please download the UniPercept weights from 🤗 UniPercept and place them in the `ckpt/` directory.
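If you prefer to fetch the checkpoint programmatically, here is a minimal sketch using `huggingface_hub` (assuming the repository id `Thunderbolt215215/UniPercept` listed on this model page):

```python
# Minimal sketch: download the UniPercept checkpoint into ckpt/
# (assumes huggingface_hub is installed, e.g. pip install huggingface_hub).
from huggingface_hub import snapshot_download

snapshot_download(repo_id="Thunderbolt215215/UniPercept", local_dir="ckpt")
```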
Visual Rating (VR)
Please download the datasets listed below and place them in the corresponding paths.
| Dataset | Domain | Download | Path |
| --- | --- | --- | --- |
| ArtiMuse-10K | IAA | 🤗 Link | `benchmark/VR/IAA/ArtiMuse-10K/image` |
| AVA | IAA | Link | `benchmark/VR/IAA/AVA/image` |
| TAD66K | IAA | Link | `benchmark/VR/IAA/TAD66K/image` |
| FLICKR-AES | IAA | Link | `benchmark/VR/IAA/FLICKR-AES/image` |
| KonIQ-10K | IQA | Link | `benchmark/VR/IQA/KonIQ-10K/image` |
| SPAQ | IQA | Link | `benchmark/VR/IQA/SPAQ/image` |
| KADID | IQA | Link | `benchmark/VR/IQA/KADID/image` |
| PIPAL | IQA | Link | `benchmark/VR/IQA/PIPAL/image` |
| ISTA-10K | ISTA | 🤗 Link | `benchmark/VR/ISTA/ISTA-10K/image` |
After setting up the data, you can configure the target datasets and devices in `src/eval/eval_vr.sh`. The results will be saved to `results/vr`.
```bash
cd UniPercept
bash src/eval/eval_vr.sh
```
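For reference, VR results on these benchmarks are conventionally summarized with rank and linear correlation against human scores (SRCC/PLCC). Below is a minimal sketch of that computation; the JSON layout with `pred`/`mos` fields and the file path are hypothetical, so adapt them to whatever `eval_vr.sh` actually writes under `results/vr`:

```python
# Sketch: SRCC/PLCC between predicted and ground-truth (MOS) scores.
# The results-file layout below is a hypothetical example, not the
# guaranteed output format of src/eval/eval_vr.sh.
import json
from scipy.stats import pearsonr, spearmanr

with open("results/vr/koniq10k.json") as f:   # hypothetical path
    records = json.load(f)                    # e.g. [{"pred": 3.8, "mos": 4.1}, ...]

preds = [r["pred"] for r in records]
mos = [r["mos"] for r in records]

srcc, _ = spearmanr(preds, mos)  # rank correlation
plcc, _ = pearsonr(preds, mos)   # linear correlation
print(f"SRCC: {srcc:.4f}  PLCC: {plcc:.4f}")
```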
Visual Question Answering (VQA)
Please download UniPercept-Bench-VQA from 🤗 UniPercept-Bench and place it into `benchmark/VQA`. Then you can configure the target domain in `src/eval/eval_vqa.sh`. The evaluation results will be saved to `results/vqa`.
```bash
cd UniPercept
bash src/eval/eval_vqa.sh
```
Interactive Image Perception
You can engage in comprehensive conversations with UniPercept regarding various aspects of an image, such as its aesthetics, quality, and structural details. An example is provided below, which you can customize based on your needs, or refer to InternVL for further implementation details.
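If you would rather call the model from Python than use the script below, here is a minimal single-image sketch following InternVL's `model.chat` interface; the single-tile 448×448 preprocessing is a simplified stand-in for InternVL's quick-start `load_image` helper (which additionally performs dynamic tiling):

```python
# Minimal sketch: single-turn conversation via InternVL's chat interface.
# Preprocessing is simplified to one 448x448 tile; see the InternVL docs
# for the full dynamic-tiling helper.
import torch
import torchvision.transforms as T
from PIL import Image
from transformers import AutoModel, AutoTokenizer

path = "ckpt"  # local UniPercept checkpoint directory
model = AutoModel.from_pretrained(
    path, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# ImageNet-normalized single tile, as in InternVL preprocessing.
transform = T.Compose([
    T.Resize((448, 448)),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
pixel_values = transform(Image.open("example.jpg").convert("RGB"))
pixel_values = pixel_values.unsqueeze(0).to(torch.bfloat16).cuda()

question = "<image>\nHow would you rate the aesthetics of this image, and why?"
response = model.chat(tokenizer, pixel_values, question,
                      dict(max_new_tokens=512, do_sample=False))
print(response)
```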
```bash
cd UniPercept
bash src/eval/conversation.sh
```
🏆 Performance
UniPercept consistently outperforms proprietary models (e.g., GPT-4o, Gemini-2.5-Pro) and leading open-source models (e.g., InternVL3, Qwen3-VL) across all three perceptual domains (IAA, IQA, ISTA) and both tasks (VR, VQA).
Performance on UniPercept-Bench-VR
Performance on UniPercept-Bench-VQA (IAA)
Performance on UniPercept-Bench-VQA (IQA)
Performance on UniPercept-Bench-VQA (ISTA)
🎨 Applications
UniPercept As Reward
UniPercept can be used as a powerful reward model for post-training Text-to-Image (T2I) models. By integrating UniPercept rewards into the training of FLUX.1-dev, we observe significant improvements in aesthetic quality, structural richness, and prompt adherence.
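The exact reward formulation used for the FLUX.1-dev experiments is described in the technical report; as a conceptual sketch, one simple choice is to average UniPercept's three VR scores per generated image. `rate_image` below is a hypothetical wrapper around the model's VR scoring (e.g., built on the chat call shown earlier), not an API shipped with this repository:

```python
# Conceptual sketch: a scalar T2I reward from UniPercept's three VR scores.
from PIL import Image

DOMAINS = ("IAA", "IQA", "ISTA")

def rate_image(image: Image.Image, domain: str) -> float:
    """Hypothetical wrapper: return a normalized VR score in [0, 1] for one
    perceptual domain. Replace with a real call into UniPercept."""
    raise NotImplementedError

def unipercept_reward(image: Image.Image) -> float:
    """Average the three perceptual-domain ratings into a single reward."""
    return sum(rate_image(image, d) for d in DOMAINS) / len(DOMAINS)
```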
UniPercept As Metrics
UniPercept can serve as a perceptual-level metric that assesses the quality of outputs from any image-generating model, covering three complementary dimensions: IAA, IQA, and ISTA.
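In the same spirit, here is a sketch of batch evaluation: mean per-domain scores over a directory of generated images, reusing the hypothetical `rate_image` wrapper from the reward sketch above:

```python
# Sketch: mean IAA/IQA/ISTA scores over a folder of generated images.
# Reuses the hypothetical rate_image(image, domain) wrapper sketched above.
from pathlib import Path
from PIL import Image

def score_outputs(folder: str) -> dict[str, float]:
    images = [Image.open(p).convert("RGB")
              for p in sorted(Path(folder).glob("*.png"))]
    return {d: sum(rate_image(im, d) for im in images) / len(images)
            for d in ("IAA", "IQA", "ISTA")}

print(score_outputs("outputs/flux_samples"))  # hypothetical path
```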
🖼️ UniPercept-Constructed Image Profiles
UniPercept performs comprehensive perceptual-level image analysis, delivering accurate visual ratings across the IAA, IQA, and ISTA dimensions, along with fine-grained multi-dimensional analytical outputs that together form a detailed image profile.
✏️ Citation
If you find UniPercept useful for your research, please consider citing our work:
```bibtex
@misc{cao2025uniperceptunifiedperceptuallevelimage,
  title={UniPercept: Towards Unified Perceptual-Level Image Understanding across Aesthetics, Quality, Structure, and Texture},
  author={Shuo Cao and Jiayang Li and Xiaohui Li and Yuandong Pu and Kaiwen Zhu and Yuanting Gao and Siqi Luo and Yi Xin and Qi Qin and Yu Zhou and Xiangyu Chen and Wenlong Zhang and Bin Fu and Yu Qiao and Yihao Liu},
  year={2025},
  eprint={2512.21675},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.21675},
}

@misc{cao2025artimusefinegrainedimageaesthetics,
  title={ArtiMuse: Fine-Grained Image Aesthetics Assessment with Joint Scoring and Expert-Level Understanding},
  author={Shuo Cao and Nan Ma and Jiayang Li and Xiaohui Li and Lihao Shao and Kaiwen Zhu and Yu Zhou and Yuandong Pu and Jiarui Wu and Jiaquan Wang and Bo Qu and Wenhai Wang and Yu Qiao and Dajuin Yao and Yihao Liu},
  year={2025},
  eprint={2507.14533},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.14533},
}
```
Base model: OpenGVLab/InternVL3-8B-Pretrained