---
pretty_name: M³Eval
language:
- en
size_categories:
- 1K<n<10K
---
## News
- **2026-05-13**: We released the M³Eval benchmark, dataset, evaluation code, and project page.
## M³Eval Overview
### Abstract
As multi-modal models advance towards long-form video understanding, memory emerges as a critical capability. Despite substantial effort in developing video datasets and benchmarks, existing work primarily focuses on perception and reasoning, without systematically evaluating memory: what models retain, how faithfully information is preserved, and how robust memory remains under interference.
To address this gap, we introduce M³Eval, the first comprehensive evaluation framework and benchmark for probing different memory dimensions in multi-modal models.
Grounded in cognitive psychology, our design features carefully constructed tasks isolating key aspects of memory. Leveraging M³Eval, we conduct extensive experiments across representative multi-modal models, revealing consistent weaknesses and distinctive behaviors.
We find that models struggle to maintain disentangled representations when processing parallel video streams, exhibit interference patterns differing substantially from those observed in human memory, ground memory sources more reliably in the spatial domain than the temporal domain, and demonstrate limited symbolic memory.
Collectively, our benchmark provides a valuable resource for future research, and our findings highlight memory as a fundamental yet underexplored capability, offering insights for designing more effective memory mechanisms in multi-modal models.
## Main Results
### Divided Attention

Accuracy (%) on three divided-attention metrics in the split-screen setting, both without swaps and with frequent left/right swaps.
### Memory Interference

Proactive interference: the first video (V1) interferes with recall of the second video (V2); retroactive interference: the second video (V2) interferes with recall of the first video (V1). Delta denotes the proactive score minus the retroactive score.
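Equivalently, writing $\mathrm{Acc}_{\text{pro}}$ and $\mathrm{Acc}_{\text{retro}}$ for accuracy under the two settings (notation introduced here, not from the paper):

$$
\Delta = \mathrm{Acc}_{\text{pro}} - \mathrm{Acc}_{\text{retro}}
$$

A positive Delta therefore means recall of V2 under proactive interference is higher than recall of V1 under retroactive interference.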
### Interleaved Events

Accuracy (%) on four interleaved reconstruction metrics.
### N-Back

Accuracy (%) on the two symbolic attributes, scene and action, averaged over all $K$ and $N$ configurations.
## Dataset Usage
### Download and Unpack
```bash
huggingface-cli download JadeHuang/m3eval \
  --repo-type dataset \
  --local-dir data/m3eval
bash data/m3eval/unpack_archives.sh
```
### Use with the Evaluation Code
```bash
git clone https://github.com/Jie-1203/m3eval.git
cd m3eval/lmms-eval
uv pip install -e ".[all]"
cd ..
bash lmms-eval/scripts/run_m3eval_vllm.sh \
  --model_path /path/to/your/model \
  --task m3eval \
  --gpus 0 \
  --batch_size 1
```
Useful task names:
- `m3eval`
- `m3eval_memory_interference`
- `m3eval_split_screen`
- `m3eval_interleaved`
- `m3eval_nback`
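To evaluate a single subtask, the corresponding task name can presumably be passed to the same runner via `--task` (a sketch mirroring the full-suite command above; the model path and GPU id are placeholders):
```bash
# Run only the N-Back subtask; the other flags follow the full-suite command above.
bash lmms-eval/scripts/run_m3eval_vllm.sh \
  --model_path /path/to/your/model \
  --task m3eval_nback \
  --gpus 0 \
  --batch_size 1
```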
## Dataset Examples
### Divided Attention

Simultaneous memory for two side-by-side videos.
### Memory Interference

Interference between sequentially presented videos.
### Interleaved Events

Memory reconstruction from temporally interleaved clips.
### N-Back

Judge whether the final clip matches the clip $N$ positions earlier.
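Stated formally (our notation, not taken from the paper): given clips $c_1, \dots, c_T$ and an attribute function $a(\cdot)$ for the probed attribute (scene or action), the expected answer is "match" exactly when

$$
a(c_T) = a(c_{T-N}).
$$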
## Citation
If you use M³Eval in your work, please cite:
```bibtex
@article{huang2026m3eval,
  title   = {M3Eval: Multi-Modal Memory Evaluation through Cognitively-Grounded Video Tasks},
  author  = {Huang, Jie and Liu, Ruixun and Sun, Sirui and Yang, Xinyi and Li, Yin and Zhu, Yixin and Zhong, Yiwu},
  journal = {arXiv preprint arXiv:XXXX.XXXXX},
  year    = {2026}
}
```