---
pretty_name: M³Eval
language:
  - en
size_categories:
  - 1K<n<10K
task_categories:
  - visual-question-answering
  - multiple-choice
tags:
  - video
  - multimodal
  - benchmark
  - memory
  - long-video
configs:
  - config_name: default
    data_files:
      - split: full
        path: viewer/full_questions.csv
---

M³Eval: Multi-Modal Memory Evaluation through Cognitively-Grounded Video Tasks

Paper · Project Page · Hugging Face Dataset · GitHub Code

News

  • 2026-05-13: We released the M³Eval benchmark, dataset, evaluation code, and project page.

M³Eval Overview

M³Eval overview

Abstract

As multi-modal models advance towards long-form video understanding, memory emerges as a critical capability. Despite substantial effort in developing video datasets and benchmarks, existing work primarily focuses on perception and reasoning, without systematically evaluating memory: what models retain, how faithfully information is preserved, and how robust memory remains under interference.

To address this gap, we introduce M³Eval, the first comprehensive evaluation framework and benchmark for probing different memory dimensions in multi-modal models.

Grounded in cognitive psychology, our design features carefully constructed tasks isolating key aspects of memory. Leveraging M³Eval, we conduct extensive experiments across representative multi-modal models, revealing consistent weaknesses and distinctive behaviors.

We find that models struggle to maintain disentangled representations when processing parallel video streams, exhibit interference patterns differing substantially from those observed in human memory, ground memory sources more reliably in the spatial domain than the temporal domain, and demonstrate limited symbolic memory.

Collectively, our benchmark provides a valuable resource for future research, while our findings highlight memory as a fundamental yet underexplored capability and offer insights for designing more effective memory mechanisms in multi-modal models.

Main Results

Divided Attention

Divided Attention main result

Divided Attention. Accuracy (%) on three divided attention metrics under the split-screen setting without swaps and with frequent left/right swaps.

Memory Interference

Memory Interference main result

Memory Interference. Proactive: first video (V1) interferes with recall of the second video (V2); retroactive: second video (V2) interferes with recall of the first video (V1). Delta denotes proactive minus retroactive.

Interleaved Events

Interleaved Events main result

Interleaved Events. Accuracy (%) on four interleaved reconstruction metrics.

N-Back

N-Back main result

N-Back. Average accuracy under two symbolic attributes, scene and action, averaged over all K and N configurations.

Dataset Details

  • schema_version: m3eval_hf_release_v2_single_video_nback
  • scope: adopted_open_source
  • subset: full

The benchmark is a composition of data from existing publicly available benchmarks. Each subset retains the original license of its source dataset. Users must comply with the respective licenses of all constituent datasets.

Question Counts

  • Non-N-back questions: 739
  • N-back questions: 1664
  • Total questions: 2403
  • N-back selected sequence cases: 64
  • N-back prefix videos: 512
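The counts above are internally consistent: the non-N-back and N-back questions sum to the total, and the 512 prefix videos divide evenly over the 64 selected sequence cases (8 per case, assuming an even split — an inference from the numbers, not stated on this card). A minimal Python check:

```python
# Sanity-check the question counts reported on this card.
NON_NBACK_QUESTIONS = 739
NBACK_QUESTIONS = 1664
TOTAL_QUESTIONS = 2403
SEQUENCE_CASES = 64
PREFIX_VIDEOS = 512

assert NON_NBACK_QUESTIONS + NBACK_QUESTIONS == TOTAL_QUESTIONS  # 739 + 1664 = 2403
# 512 prefix videos over 64 cases works out to 8 per case,
# assuming prefix videos are evenly distributed across cases.
assert PREFIX_VIDEOS % SEQUENCE_CASES == 0
print(PREFIX_VIDEOS // SEQUENCE_CASES)  # → 8
```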

Archive Groups

The full release contains 2,403 questions and uses single-video N-back prefix videos:

| Archive group | Files | Source size |
| --- | --- | --- |
| non_nback_memory_interference | 110 | 128.671 GiB |
| nback_prefix_videos | 512 | 54.904 GiB |
| non_nback_interleaved | 134 | 151.713 GiB |
| non_nback_split_screen | 207 | 205.905 GiB |
| **Total** | **963** | **541.193 GiB** |

Upload Batches

  • batch_00: 128.671 GiB (3 files)
  • batch_01: 144.904 GiB (3 files)
  • batch_02: 106.713 GiB (3 files)
  • batch_03: 135.000 GiB (3 files)
  • batch_04: 25.905 GiB (1 file)

N-back Format

N-back questions contain a single video path, for example `videos/nback/action/...mp4`. The legacy `video_path_sequence` and `full_sequence_case_path` fields are retained only as provenance metadata.
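Based on the example path above, N-back clips appear to be laid out as `videos/nback/<attribute>/<clip>.mp4`, where the attribute is one of the two symbolic attributes (`scene` or `action`). A small helper to recover the attribute from a path, under that layout assumption:

```python
from pathlib import PurePosixPath

def nback_attribute(video_path: str) -> str:
    """Extract the symbolic attribute ('scene' or 'action') from an N-back
    video path, assuming the videos/nback/<attribute>/<clip>.mp4 layout
    inferred from the example path on this card."""
    parts = PurePosixPath(video_path).parts
    if len(parts) < 4 or parts[:2] != ("videos", "nback"):
        raise ValueError(f"not an N-back video path: {video_path}")
    return parts[2]

print(nback_attribute("videos/nback/action/clip_0001.mp4"))  # → action
```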

Repository Layout

```
README.md
manifests/
questions/
qa_root/
nback/
viewer/
archives/
checksums_sha256.txt
unpack_archives.sh
```

After running `bash unpack_archives.sh`, videos are materialized under `videos/`, including the N-back prefix videos under `videos/nback/`.
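The release ships a `checksums_sha256.txt` manifest. Assuming it uses the standard `sha256sum` line format (`<hex digest>  <relative path>` — check the file, as this is an assumption), it can be verified in place with `sha256sum -c checksums_sha256.txt`, or portably with a short Python script:

```python
import hashlib
from pathlib import Path

def verify_checksums(manifest: Path, root: Path) -> list[str]:
    """Return the relative paths whose SHA-256 digest does not match the
    manifest. Assumes standard `sha256sum` lines: '<hex>  <path>'."""
    mismatched = []
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        digest, _, rel = line.partition("  ")
        h = hashlib.sha256()
        with open(root / rel.strip(), "rb") as f:
            # Hash in 1 MiB chunks so large archives don't load into memory.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        if h.hexdigest() != digest.strip():
            mismatched.append(rel.strip())
    return mismatched
```

For example, `verify_checksums(Path("data/m3eval/checksums_sha256.txt"), Path("data/m3eval"))` returns an empty list when every downloaded file is intact.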

Dataset Usage

Download and Unpack

```shell
huggingface-cli download JadeHuang/m3eval \
  --repo-type dataset \
  --local-dir data/m3eval

bash data/m3eval/unpack_archives.sh
```

Use with the Evaluation Code

```shell
git clone https://github.com/Jie-1203/m3eval.git
cd m3eval/lmms-eval
uv pip install -e ".[all]"
cd ..

bash lmms-eval/scripts/run_m3eval_vllm.sh \
  --model_path /path/to/your/model \
  --task m3eval \
  --gpus 0 \
  --batch_size 1
```

Useful task names:

  • m3eval
  • m3eval_memory_interference
  • m3eval_split_screen
  • m3eval_interleaved
  • m3eval_nback
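To sweep the sub-tasks one at a time, the task names above can be looped over. A sketch, assuming `run_m3eval_vllm.sh` accepts the same flags as the example invocation earlier:

```shell
# Dry run: print one evaluation command per M3Eval sub-task.
# Drop the leading `echo` to actually launch the runs.
for task in m3eval_memory_interference m3eval_split_screen \
            m3eval_interleaved m3eval_nback; do
  echo bash lmms-eval/scripts/run_m3eval_vllm.sh \
    --model_path /path/to/your/model \
    --task "$task" \
    --gpus 0 \
    --batch_size 1
done
```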

Dataset Examples

Divided Attention

Divided Attention dataset example

Divided Attention. Simultaneous memory for two side-by-side videos.

Memory Interference

Memory Interference dataset example

Memory Interference. Interference between sequentially presented videos.

Interleaved Events

Interleaved Events dataset example

Interleaved Events. Memory reconstruction from temporally interleaved clips.

N-Back

N-Back dataset example

N-Back. Judge whether the final clip matches the clip $N$ positions earlier.

Citation

If you use M³Eval in your work, please cite:

```bibtex
@article{huang2026m3eval,
  title   = {M3Eval: Multi-Modal Memory Evaluation through Cognitively-Grounded Video Tasks},
  author  = {Huang, Jie and Liu, Ruixun and Sun, Sirui and Yang, Xinyi and Li, Yin and Zhu, Yixin and Zhong, Yiwu},
  journal = {arXiv preprint arXiv:XXXX.XXXXX},
  year    = {2026}
}
```