---
license: cc-by-4.0
task_categories:
  - video-classification
  - video-text-to-text
  - object-detection
tags:
  - egocentric-video
  - mistake-detection
  - temporal-localization
  - video-language-grounding
  - hand-object-interaction
  - action-recognition
  - procedural-activities
  - semantic-role-labeling
  - ego4d
  - epic-kitchens
  - point-of-no-return
  - cvpr2026
pretty_name: MATT-Bench
size_categories:
  - 100K<n<1M
---

# Mistake Attribution: Fine-Grained Mistake Understanding in Egocentric Videos

CVPR 2026

Yayuan Li¹, Aadit Jain¹, Filippos Bellos¹, Jason J. Corso¹,²

¹University of Michigan, ²Voxel51

[Paper] [Code] [Project Page]


Dataset coming soon. We are preparing the data for public release. Stay tuned!

## MATT-Bench Overview

MATT-Bench provides two large-scale benchmarks for Mistake Attribution (MATT) — a task that goes beyond binary mistake detection to attribute what semantic role was violated, when the mistake became irreversible (Point-of-No-Return), and where the mistake occurred in the frame.

The benchmarks are constructed by MisEngine, a data engine that automatically creates mistake samples with attribution-rich annotations from existing egocentric action datasets:

| Dataset | Samples | Instruction Texts | Semantic | Temporal | Spatial |
|---|---|---|---|---|---|
| Ego4D-M | 257,584 | 16,099 | ✓ | ✓ | ✓ |
| EPIC-KITCHENS-M | 221,094 | 12,283 | ✓ | | |

These are at least two orders of magnitude larger than any existing mistake dataset.

## Annotations

Each sample consists of an instruction text and an attempt video, annotated with:

- **Semantic Attribution:** which semantic role (predicate, object) in the instruction is violated in the attempt video
- **Temporal Attribution:** the Point-of-No-Return (PNR) frame at which the mistake becomes irreversible (Ego4D-M only)
- **Spatial Attribution:** a bounding box localizing the mistake region in the PNR frame (Ego4D-M only)
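Until the data is released, the sketch below illustrates what a sample record with all three attribution types might look like. All field names and values here are illustrative assumptions based on the description above, not the final released format:

```python
# Hypothetical MATT-Bench sample record. Field names and values are
# illustrative assumptions, not the official schema.
sample = {
    "instruction": "Pour the milk into the bowl.",   # instruction text
    "video": "attempt_clip.mp4",                     # attempt video
    # Semantic attribution: which role of the instruction is violated.
    "semantic": {"violated_role": "object"},
    # Temporal attribution (Ego4D-M only): PNR frame index.
    "temporal": {"pnr_frame": 142},
    # Spatial attribution (Ego4D-M only): bbox in the PNR frame,
    # normalized [x1, y1, x2, y2].
    "spatial": {"pnr_bbox": [0.31, 0.22, 0.58, 0.47]},
}

def has_full_attribution(s: dict) -> bool:
    """True if a sample carries semantic, temporal, and spatial labels
    (expected for Ego4D-M; EPIC-KITCHENS-M samples are semantic-only)."""
    return all(k in s for k in ("semantic", "temporal", "spatial"))
```

An EPIC-KITCHENS-M sample would omit the `temporal` and `spatial` keys, so `has_full_attribution` would return `False` for it.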

## Citation

```bibtex
@inproceedings{li2026mistakeattribution,
  title     = {Mistake Attribution: Fine-Grained Mistake Understanding in Egocentric Videos},
  author    = {Li, Yayuan and Jain, Aadit and Bellos, Filippos and Corso, Jason J.},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2026},
}
```