Yayuan Li committed
Commit 4e57a65 · 1 Parent(s): 08b66a7

Add dataset card with coming soon notice, metadata, and links

Files changed (1): README.md (+74 −3), replacing the previous front matter (`license: apache-2.0`) with the card below.
---
license: cc-by-4.0
task_categories:
- video-classification
- video-text-to-text
- object-detection
tags:
- egocentric-video
- mistake-detection
- temporal-localization
- video-language-grounding
- hand-object-interaction
- action-recognition
- procedural-activities
- semantic-role-labeling
- ego4d
- epic-kitchens
- point-of-no-return
- cvpr2026
pretty_name: MATT-Bench
size_categories:
- 100K<n<1M
---

# MATT-Bench

> **Dataset coming soon.** We are preparing the data for public release. Stay tuned!

**Mistake Attribution: Fine-Grained Mistake Understanding in Egocentric Videos** (CVPR 2026)

## Overview

MATT-Bench provides two large-scale benchmarks for **Mistake Attribution (MATT)**, a task that goes beyond binary mistake detection to attribute *what* semantic role was violated, *when* the mistake became irreversible (the Point-of-No-Return), and *where* in the frame it occurred.

The benchmarks are constructed by **MisEngine**, a data engine that automatically creates mistake samples with attribution-rich annotations from existing egocentric action datasets:

| Dataset | Samples | Instruction Texts | Semantic | Temporal | Spatial |
|---|---|---|---|---|---|
| **Ego4D-M** | 257,584 | 16,099 | ✓ | ✓ | ✓ |
| **EPIC-KITCHENS-M** | 221,094 | 12,283 | ✓ | — | — |

Both benchmarks are at least **two orders of magnitude larger** than any existing mistake dataset.
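
As a toy illustration of the semantic axis of the task (the role format here is an assumption for illustration, not the dataset's actual schema), attributing a violated semantic role can be sketched as comparing the instruction's roles against those realized in the attempt:

```python
# Toy illustration of semantic Mistake Attribution. The role dictionaries
# below are a hypothetical format, not the MATT-Bench release schema.

def attribute_semantic_mistake(instruction_roles, attempt_roles):
    """Return the role names whose fillers differ between the instruction
    and the attempt, e.g. ['object'] or ['predicate']."""
    return [role for role, filler in instruction_roles.items()
            if attempt_roles.get(role) != filler]

# Instruction: "slice the tomato"; attempt: the person slices an onion.
instruction = {"predicate": "slice", "object": "tomato"}
attempt = {"predicate": "slice", "object": "onion"}

print(attribute_semantic_mistake(instruction, attempt))  # ['object']
```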

## Annotations

Each sample consists of an instruction text and an attempt video, annotated with:

- **Semantic Attribution**: which semantic role (predicate, object) of the instruction is violated in the attempt video
- **Temporal Attribution**: the Point-of-No-Return (PNR) frame at which the mistake becomes irreversible (Ego4D-M only)
- **Spatial Attribution**: a bounding box localizing the mistake region in the PNR frame (Ego4D-M only)

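A hypothetical sketch of how one Ego4D-M-style record might look (all field names and values are assumptions for illustration; the released schema may differ), together with basic sanity checks on the temporal and spatial annotations:

```python
# Hypothetical single-sample record covering all three attribution axes.
# Field names are illustrative assumptions, not the official schema.
sample = {
    "instruction": "pour water into the cup",
    "video": "attempt_000123.mp4",         # attempt clip (hypothetical path)
    "violated_role": "object",             # semantic attribution
    "pnr_frame": 57,                       # temporal attribution (PNR)
    "mistake_bbox": [412, 180, 608, 355],  # spatial attribution: [x1, y1, x2, y2] in the PNR frame
}

def validate_sample(s, num_frames, width, height):
    """Sanity checks: PNR frame lies inside the clip, bbox lies inside the frame."""
    x1, y1, x2, y2 = s["mistake_bbox"]
    return (0 <= s["pnr_frame"] < num_frames
            and 0 <= x1 < x2 <= width
            and 0 <= y1 < y2 <= height)

print(validate_sample(sample, num_frames=300, width=1920, height=1080))  # True
```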
## Links

- [Paper (arXiv)](https://arxiv.org/abs/2511.20525)
- [Code (GitHub)](https://github.com/yayuanli/MATT)
- [Project Page](https://yayuanli.github.io/MATT/)

## Authors

- [Yayuan Li](https://www.linkedin.com/in/yayuan-li-148659272/), University of Michigan
- [Aadit Jain](https://www.linkedin.com/in/jain-aadit/), University of Michigan
- [Filippos Bellos](https://www.linkedin.com/in/filippos-bellos-168595156/), University of Michigan
- [Jason J. Corso](https://www.linkedin.com/in/jason-corso/), University of Michigan and Voxel51

## Citation

```bibtex
@inproceedings{li2026mistakeattribution,
  title     = {Mistake Attribution: Fine-Grained Mistake Understanding in Egocentric Videos},
  author    = {Li, Yayuan and Jain, Aadit and Bellos, Filippos and Corso, Jason J.},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2026}
}
```