---
license: cc-by-nc-4.0
task_categories:
- question-answering
- visual-question-answering
language:
- en
- zh
tags:
- personal-memory
- multimodal
- long-term-memory
- retrieval-augmented-generation
- benchmark
pretty_name: ATM-Bench
size_categories:
- 1K<n<10K
---
# ATM-Bench: Long-Term Personalized Referential Memory QA

ATM-Bench is the first benchmark for multimodal, multi-source personalized referential memory QA over long time horizons (~4 years), with evidence-grounded retrieval and answering.

Paper: [According to Me: Long-Term Personalized Referential Memory QA](https://arxiv.org/abs/2603.01990)
## Overview

Existing long-term memory benchmarks focus primarily on dialogue history, failing to capture realistic personalized references grounded in lived experience. ATM-Bench addresses this gap with:

- **Multimodal and multi-source data**: 3,759 images, 533 videos, and 6,742 emails spanning ~4 years
- **Referential queries**: resolving personalized references (e.g., "Show me the moments where Grace was trying to be sneaky...")
- **Evidence-grounded QA**: human-annotated QA pairs with ground-truth memory evidence
- **Multi-evidence reasoning**: queries requiring evidence from multiple sources
- **NIAH evaluation**: a Needle-In-A-Haystack protocol isolating reasoning from retrieval
## Dataset Structure

```
data/
├── atm-bench/
│   ├── atm-bench.json            # Full benchmark (1,013 questions)
│   ├── atm-bench-hard.json       # Challenging evaluation split (31 questions)
│   └── niah/                     # Needle-In-A-Haystack variants
│       ├── atm-bench-hard-niah25.json
│       ├── atm-bench-hard-niah50.json
│       ├── atm-bench-hard-niah100.json
│       └── atm-bench-hard-niah200.json
└── raw_memory/
    ├── email/
    │   └── emails.json           # 6,742 emails with summaries
    ├── image/                    # 3,759 personal photos (.jpg)
    ├── video/                    # 533 personal videos (.mp4)
    └── geocoding_cache/          # Pre-computed reverse geocoding
        ├── image/                # 3,759 location cache files
        └── video/                # 533 location cache files
```
## QA Data Format

Each question in `atm-bench.json` and `atm-bench-hard.json` has the following fields:

```json
{
  "id": "uuid",
  "question": "How much did I pay for my hotel during my recent trip to Portugal?",
  "answer": "€842.97",
  "notes": "",
  "evidence_ids": ["20250310_202208", "email202502110008", "email202502200013"],
  "qtype": "number"
}
```
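A minimal sanity check against this schema can look like the sketch below. The validator and its names are illustrative, not part of the official tooling; the required-key set is read off the example above.

```python
# Illustrative QA-record checker for the format documented above.
# REQUIRED and QTYPES are inferred from the card, not from official code.
REQUIRED = {"id", "question", "answer", "notes", "evidence_ids", "qtype"}
QTYPES = {"open_end", "number", "list_recall"}  # question types listed in the card

def validate(record):
    """Raise ValueError if a record is missing fields or has an unknown qtype."""
    missing = REQUIRED - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if record["qtype"] not in QTYPES:
        raise ValueError(f"unknown qtype: {record['qtype']}")
    return True

sample = {
    "id": "uuid",
    "question": "How much did I pay for my hotel during my recent trip to Portugal?",
    "answer": "€842.97",
    "notes": "",
    "evidence_ids": ["20250310_202208", "email202502110008", "email202502200013"],
    "qtype": "number",
}
print(validate(sample))  # → True
```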
Question types:

| Type | ATM-Bench | ATM-Bench-Hard |
|---|---|---|
| `open_end` | 514 | 13 |
| `number` | 360 | 6 |
| `list_recall` | 139 | 12 |
| **Total** | 1,013 | 31 |
NIAH variants add a `niah_evidence_ids` field containing the evidence pool (ground-truth evidence plus distractors).
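The pool can be split back into ground truth and distractors in a few lines of Python. Field names are as documented above; the sample record itself is invented for illustration.

```python
def split_pool(record):
    """Separate a NIAH record's evidence pool into ground truth and distractors."""
    truth = set(record["evidence_ids"])
    distractors = [e for e in record["niah_evidence_ids"] if e not in truth]
    return sorted(truth), distractors

# Invented sample record following the documented field names.
record = {
    "evidence_ids": ["20250310_202208", "email202502110008"],
    "niah_evidence_ids": [
        "20250310_202208", "email202502110008",   # ground truth
        "20240101_120000", "email202401050001",   # distractors
    ],
}
gt, noise = split_pool(record)
print(len(gt), len(noise))  # → 2 2
```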
## Raw Memory

- **Images**: Personal photos with EXIF GPS and timestamps preserved.
- **Videos**: Personal videos, re-encoded; GPS and timestamps preserved in the MP4 metadata.
- **Emails**: Summarized emails with `id`, `timestamp`, `short_summary`, and `detail` fields. Institutional email addresses and specific identifying details have been redacted.
- **Geocoding cache**: Pre-computed reverse-geocoding results for GPS coordinates, avoiding repeated API calls during memory processing.
## Memory Evidence IDs

Evidence IDs follow these conventions:

- **Image/Video**: `YYYYMMDD_HHMMSS` (timestamp-based filename without extension)
- **Email**: `emailYYYYMMDDNNNN` (date + sequence number)
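These conventions make evidence IDs straightforward to parse; a small illustrative helper (the function and names are ours, not part of the dataset):

```python
import re

# Patterns for the two documented ID conventions.
MEDIA_RE = re.compile(r"^(\d{8})_(\d{6})$")       # YYYYMMDD_HHMMSS
EMAIL_RE = re.compile(r"^email(\d{8})(\d{4})$")   # emailYYYYMMDDNNNN

def parse_evidence_id(eid):
    """Classify an evidence ID and extract its date components.

    Note: the media convention alone cannot distinguish images from videos;
    that requires checking which raw_memory/ subdirectory holds the file.
    """
    m = MEDIA_RE.match(eid)
    if m:
        return {"source": "image_or_video", "date": m.group(1), "time": m.group(2)}
    m = EMAIL_RE.match(eid)
    if m:
        return {"source": "email", "date": m.group(1), "seq": int(m.group(2))}
    raise ValueError(f"unrecognized evidence id: {eid}")

print(parse_evidence_id("20250310_202208")["source"])  # → image_or_video
print(parse_evidence_id("email202502110008")["seq"])   # → 8
```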
## Usage

### Download

```python
from datasets import load_dataset

# Load QA data only
dataset = load_dataset("Jingbiao/ATM-Bench", data_files="data/atm-bench/*.json")
```

Or clone the full dataset (including images and videos, ~3.1 GB):

```bash
# Install Git LFS first
git lfs install
git clone https://huggingface.co/datasets/Jingbiao/ATM-Bench
```
### With the evaluation codebase

```bash
# Clone the codebase
git clone https://github.com/JingbiaoMei/ATM-Bench.git
cd ATM-Bench

# Place the data under data/
# The repo expects: data/atm-bench/, data/raw_memory/
# See the GitHub repo for full evaluation instructions
```
## Privacy and Ethics

This dataset is derived from real personal data with the data owner's consent. The following PII mitigations have been applied:

- **Images**: EXIF device identifiers (Make, Model, Software, ImageUniqueID) stripped; GPS and timestamps preserved, as they are features of the benchmark.
- **Videos**: Original device metadata removed.
- **Emails**: Private email addresses replaced with `[email_address]`; private phone numbers replaced with `[phone_number]`; private website links replaced with `[link]`.
- **Sensitive visual content**: Images containing sensitive information have been manually reviewed and redacted with black boxes.

See the detailed ethical considerations in the paper for more information.
## Citation

```bibtex
@article{mei2026atm,
  title={According to Me: Long-Term Personalized Referential Memory QA},
  author={Mei, Jingbiao and Chen, Jinghong and Yang, Guangyu and Hou, Xinyu and Li, Margaret and Byrne, Bill},
  journal={arXiv preprint arXiv:2603.01990},
  year={2026},
  url={https://arxiv.org/abs/2603.01990},
  doi={10.48550/arXiv.2603.01990}
}
```
## License
This dataset is released under CC BY-NC 4.0. The accompanying code is released under the MIT License.
