# PersonaVLM: Long-Term Personalized Multimodal LLMs (CVPR 2026)
## News

Our paper "PersonaVLM: Long-Term Personalized Multimodal LLMs" has been accepted to CVPR 2026!
## Introduction
PersonaVLM is an innovative personalized multimodal agent framework designed for long-term personalization. It transforms a general-purpose MLLM into a personalized assistant by integrating three key capabilities:
- **Remembering:** Proactively extracts and summarizes multimodal memories into a personalized database.
- **Reasoning:** Conducts multi-turn reasoning by retrieving relevant memories from a multi-type memory architecture (core, semantic, episodic, and procedural).
- **Response Alignment:** Infers the user's evolving personality using a Momentum-based Personality Evolving Mechanism (PEM) to ensure aligned outputs.
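To illustrate the flavor of a momentum-based update like PEM, here is a minimal sketch that blends each session's observed trait vector into a running personality profile with exponential momentum. The trait names, the momentum coefficient, and the update rule itself are illustrative assumptions for exposition, not the paper's actual formulation.

```python
def update_personality(profile, observation, momentum=0.9):
    """Blend a newly observed trait vector into the running profile.

    A high momentum keeps the profile stable across sessions while still
    letting it drift toward the user's evolving behavior.
    """
    return {
        trait: momentum * profile[trait] + (1.0 - momentum) * observation[trait]
        for trait in profile
    }

# Hypothetical trait vector tracked across two sessions.
profile = {"formality": 0.8, "humor": 0.2}
for session_obs in [{"formality": 0.6, "humor": 0.5},
                    {"formality": 0.5, "humor": 0.7}]:
    profile = update_personality(profile, session_obs)
```

With momentum 0.9, two sessions of observations nudge the profile only slightly, which is the point of a momentum-based mechanism: single interactions should not overwrite a long-term persona.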
## Persona-MME Benchmark
We establish Persona-MME, a comprehensive benchmark comprising over 2,000 curated interaction cases across 14 fine-grained tasks to assess long-term MLLM personalization.
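Conceptually, each benchmark case pairs a user's interaction history with a question and a reference answer, and models are scored over all cases. The sketch below shows one way such an evaluation loop could look; the field names, the exact-match scoring rule, and the example cases are assumptions for illustration, not Persona-MME's actual schema or metric.

```python
from dataclasses import dataclass

@dataclass
class PersonaCase:
    task: str        # one of the fine-grained task types
    history: list    # prior interactions (text, or references to images)
    question: str
    reference: str   # ground-truth personalized answer

def accuracy(cases, predict):
    """Fraction of cases where the model's answer matches the reference."""
    correct = sum(predict(case) == case.reference for case in cases)
    return correct / len(cases)

# Two hypothetical preference-recall cases.
cases = [
    PersonaCase("preference_recall",
                ["user: I only drink oat-milk lattes"],
                "What coffee should I order?", "oat-milk latte"),
    PersonaCase("preference_recall",
                ["user: I'm allergic to peanuts"],
                "Is satay sauce safe for me?", "no"),
]

# An oracle predictor that always returns the reference answer scores 1.0.
score = accuracy(cases, lambda case: case.reference)
```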
## Official Resources
This project consists of several components. You can access the model weights, training data, benchmark, and code via the links below:
| Resource | Link |
|---|---|
| Project Page | https://PersonaVLM.github.io |
| Official Code | GitHub: PersonaVLM |
| Model Weights | HF: PersonaVLM (Qwen2.5-VL-7B) |
| Benchmark | HF: Persona-MME (2,000+ cases) |
| Training Data | HF: PersonaVLM-Dataset (80k+ samples) |
## Citation
If you find our work helpful, please cite our paper:
```bibtex
@inproceedings{nie2026personavlm,
  title={PersonaVLM: Long-Term Personalized Multimodal LLMs},
  author={Nie, Chang and Fu, Chaoyou and Zhang, Yifan and Yang, Haihua and Shan, Caifeng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026}
}
```