aaron16 committed on
Commit 0759130 · verified · 1 Parent(s): f06e369

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +111 -3
README.md CHANGED
@@ -1,3 +1,111 @@
- ---
- license: mit
- ---
---
pretty_name: MMNeedle
language:
- en
license: cc-by-4.0
homepage: https://mmneedle.github.io/
paper: https://arxiv.org/abs/2406.11230
tagline: Benchmarking long-context multimodal retrieval under extreme visual context lengths.
tags:
- multimodal
- visual-question-answering
- long-context
- evaluation
size_categories:
- 100K<n<1M
---

# MMNeedle

MMNeedle is a stress test for long-context multimodal reasoning. Each example
contains a sequence of haystack images created by stitching MS COCO sub-images
into 1×1, 2×2, 4×4, or 8×8 grids. Given textual needle descriptions (derived
from MS COCO captions), models must predict which haystack image and which
sub-image cell matches each caption, or report that the needle is absent.

This dataset card accompanies the official Hugging Face release, so researchers
no longer need to download the data from Google Drive or regenerate the
benchmark from MS COCO.

## Dataset structure

- **Sequences (`sequence_length`)**: either a single stitched image or a set of 10 stitched images.
- **Grid sizes (`grid_rows`, `grid_cols`)**: {1, 2, 4, 8} with square layouts.
- **Needles per query (`needles_per_query`)**: {1, 2, 5}. Each query provides that many captions.
- **Examples per configuration**: 10,000. Half contain the needle(s); half are negatives.
- **Total examples**: 210,000 (21 configurations × 10k samples).

Every example stores the full list of haystack image paths, the ground-truth
needle locations (`image_index`, `row`, `col`), the MS COCO image IDs for the
needles, the natural-language captions, and a `has_needle` boolean.

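The grid settings above determine how many 256×256 sub-image cells a model must search per example. A minimal sketch of that arithmetic (the helper below is illustrative only, not part of the dataset):

```python
# Search-space size per example, per the configuration axes listed above.
# SEQUENCE_LENGTHS and GRID_SIZES restate the card; the helper is hypothetical.
SEQUENCE_LENGTHS = [1, 10]
GRID_SIZES = [1, 2, 4, 8]

def cells_per_example(sequence_length: int, grid: int) -> int:
    """Total sub-image cells a model must scan for one example."""
    return sequence_length * grid * grid

# The hardest setting: a sequence of 10 stitched images, each an 8x8 grid.
hardest = cells_per_example(10, 8)
print(hardest)  # 640 candidate cells per query
```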
## Usage

```python
from datasets import load_dataset

ds = load_dataset("Wang-ML-Lab/MMNeedle", split="test")
example = ds[0]
print(example.keys())
# dict_keys(['id', 'sequence_length', 'grid_rows', 'grid_cols', 'needles_per_query',
#            'haystack_images', 'needle_locations', 'needle_image_ids',
#            'needle_captions', 'has_needle'])
```

Each entry in `haystack_images` is a PIL-compatible image object. `needle_captions`
contains one string per requested needle (even for negative examples, where the
corresponding location is `(-1, -1, -1)`).

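Because each grid cell is 256×256 px, a ground-truth location can be mapped back to a pixel crop of the stitched image. A sketch under the schema shown above (`crop_needle` is a hypothetical helper, not dataset tooling); it works with any object exposing a PIL-style `crop((left, top, right, bottom))` method:

```python
CELL = 256  # per the dataset card, each grid cell is 256x256 px

def crop_needle(example: dict) -> list:
    """Crop the sub-image cell for each positive needle location.

    Assumes the schema shown above: `haystack_images` is a list of
    PIL-compatible images and each entry of `needle_locations` has
    `image_index`, `row`, and `col` (all -1 for negatives).
    """
    crops = []
    for loc in example["needle_locations"]:
        if loc["image_index"] < 0:  # negative example: needle absent
            continue
        img = example["haystack_images"][loc["image_index"]]
        left, top = loc["col"] * CELL, loc["row"] * CELL
        crops.append(img.crop((left, top, left + CELL, top + CELL)))
    return crops
```

This is handy for visually spot-checking that a parsed model answer points at the cell the caption actually describes.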
## Data fields

| Field | Type | Description |
| --- | --- | --- |
| `id` | string | Unique identifier combining configuration and sample id. |
| `sequence_length` | int | Number of stitched haystack images shown to the model. |
| `grid_rows`, `grid_cols` | int | Dimensions of the stitched grid (each cell is 256×256 px). |
| `needles_per_query` | int | Number of captions provided for the sample (1, 2, or 5). |
| `haystack_images` | list of `Image` | Ordered haystack images for the sequence. |
| `needle_locations` | list of dict | One dict per caption with `image_index`, `row`, and `col` (−1 when absent). |
| `needle_image_ids` | list of string | MS COCO filenames that generated each caption. |
| `needle_captions` | list of string | MS COCO captions used as the needle descriptions. |
| `has_needle` | bool | True if at least one caption corresponds to a haystack cell. |

## Recommended evaluation protocol

1. Feed the ordered haystack images (preserving grid layout), plus the instruction
   template from the MMNeedle paper, to your multimodal model.
2. Parse the model output into `(image_index, row, col)` triples.
3. Compare against `needle_locations` to compute accuracy for positives and the
   false-positive rate for negatives.

See the repository’s `needle.py` for a reference implementation.

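The steps above can be sketched as a small scorer. This is a hypothetical illustration of the protocol, not the official `needle.py` implementation; it assumes predictions have already been parsed into triples (step 2) and that examples follow the schema in this card:

```python
def score(predictions, examples):
    """Exact-match accuracy on positives and false-positive rate on negatives.

    `predictions` is one list of (image_index, row, col) tuples per example,
    parsed from the model output; `examples` follow this card's schema.
    """
    pos_hits = pos_total = neg_fp = neg_total = 0
    for pred, ex in zip(predictions, examples):
        gold = [(loc["image_index"], loc["row"], loc["col"])
                for loc in ex["needle_locations"]]
        if ex["has_needle"]:
            pos_total += 1
            pos_hits += int(pred == gold)  # all needles must match exactly
        else:
            neg_total += 1
            # any answer other than "absent" (-1, -1, -1) is a false positive
            neg_fp += int(any(p != (-1, -1, -1) for p in pred))
    accuracy = pos_hits / pos_total if pos_total else 0.0
    fp_rate = neg_fp / neg_total if neg_total else 0.0
    return accuracy, fp_rate
```

Requiring an exact match on all needle locations is one reasonable convention for the multi-needle settings; per-needle partial credit is another, and the paper's protocol should be consulted for official numbers.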
## Source data

- **Images & captions**: MS COCO 2014 validation split (CC BY 4.0).
- **Needle metadata**: automatically generated by the MMNeedle authors; included
  here as JSON files.

## Licensing

All stitched haystack images inherit the [Creative Commons Attribution 4.0
License](https://creativecommons.org/licenses/by/4.0/) from MS COCO. At a
minimum, attribution should cite both MMNeedle and MS COCO.

## Citations

```bibtex
@article{wang2024mmneedle,
  title={Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal Large Language Models},
  author={Wang, Hengyi and Shi, Haizhou and Tan, Shiwei and Qin, Weiyi and Wang, Wenyuan and Zhang, Tunyu and Nambi, Akshay and Ganu, Tanuja and Wang, Hao},
  journal={arXiv preprint arXiv:2406.11230},
  year={2024}
}

@inproceedings{lin2014microsoft,
  title={Microsoft COCO: Common Objects in Context},
  author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C. Lawrence},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2014}
}
```