lacebench committed
Commit ea6f59c · verified · Parent: e6bb13f

Add files using upload-large-folder tool

Files changed (3):
  1. README.md +148 -3
  2. data/lace_test.parquet +3 -0
  3. data/lace_train.parquet +3 -0
README.md CHANGED
@@ -1,3 +1,148 @@
- ---
- license: cc-by-4.0
- ---
---
license: cc-by-4.0
task_categories:
- image-to-text
- visual-question-answering
tags:
- Multimodal benchmark
- Vision-Language Models
- Compositionality
- Localism-aware compositionality
- Multimodal knowledge editing
---

# LACE-Bench: Localism-Aware Compositionality Evaluation Benchmark for Vision-Language Models

> **LACE-Bench** is a benchmark for evaluating *localism-aware compositionality* in vision-language models (VLMs) — the ability to selectively integrate local region-level semantics with global scene-level understanding. It comprises two complementary tasks: **LoGoCap** and **MMComE**.

## Dataset Card

| Field | Info |
|---|---|
| **Tasks** | LoGoCap (Local & Global Compositional Captioning), MMComE (Multimodal Compositional Knowledge Editing) |
| **Modality** | Vision-Language |
| **Splits** | Train (9,874 images) / Test (2,183 images) |
| **Total** | 12,057 images |
| **Image Source** | [Visual Genome](https://homes.cs.washington.edu/~ranjay/visualgenome/api.html) |
| **License** | CC BY 4.0 |

## Tasks

### 1. LoGoCap — Multi-grained Local and Global Compositional Captioning

LoGoCap evaluates a model's *static selectivity*: can it simultaneously understand the global scene while identifying and grounding constituent local objects?

- **Local captioning**: given an atomic region (a single object marked with a colored bounding box), generate a region-specific caption.
- **Global captioning**: given a compound region (a group of atomic regions), generate a single coherent caption that integrates all constituent local parts while introducing holistic scene-level context (moods, relations, atmosphere) not present in any individual local caption.

Evaluation uses standard captioning metrics (BLEU, ROUGE-1, METEOR) against human-annotated reference captions.
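
A minimal scoring sketch for one caption pair, assuming the `nltk` and `rouge-score` packages; it illustrates the metrics only and is not the benchmark's official evaluation harness:

```python
# Score one generated caption against a human reference.
# Assumes: pip install nltk rouge-score
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "the tall clock on the street"          # human-annotated caption
hypothesis = "a tall clock standing on the street"  # model output

# BLEU takes a list of tokenized references and a tokenized hypothesis.
bleu = sentence_bleu(
    [reference.split()],
    hypothesis.split(),
    smoothing_function=SmoothingFunction().method1,  # avoids zero scores on short captions
)

# ROUGE-1 F-measure over unigram overlap.
scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
rouge1_f = scorer.score(reference, hypothesis)["rouge1"].fmeasure

print(f"BLEU: {bleu:.3f}  ROUGE-1: {rouge1_f:.3f}")
```

METEOR can be computed the same way with `nltk.translate.meteor_score` after downloading NLTK's WordNet data.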

### 2. MMComE — Multimodal Compositional Knowledge Editing

MMComE evaluates a model's *dynamic robustness*: can it apply a localized counterfactual edit (e.g., replacing *referee* with *spectator*) consistently across region-marked images, while preserving all unrelated global semantics?

A multimodal edit request is defined as a tuple `(I, r, ph → ph*)`, where:
- `I` is the image and `r` is the target region
- `ph` is the original phrase to be replaced, and `ph*` is its counterfactual substitute

The model is evaluated on whether it correctly reflects `ph*` in in-scope (edited) regions while retaining the original semantics of all out-of-scope (unedited) regions.
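
As a concrete illustration, one way an edit request could be represented and checked; the class and helper below are hypothetical, not an API shipped with the benchmark:

```python
# Hypothetical representation of an MMComE edit request (I, r, ph -> ph*).
from dataclasses import dataclass

@dataclass
class EditRequest:
    image_id: str   # I: the Visual Genome image
    region_id: str  # r: the target region, e.g. "2358647_0"
    ph: str         # original phrase, e.g. "referee"
    ph_star: str    # counterfactual substitute, e.g. "spectator"

def check_caption(request: EditRequest, caption: str, in_scope: bool) -> bool:
    """In-scope captions must reflect ph*; out-of-scope captions must keep ph intact."""
    if in_scope:
        return request.ph_star in caption and request.ph not in caption
    return request.ph in caption and request.ph_star not in caption

edit = EditRequest("2358647", "2358647_0", "referee", "spectator")
print(check_caption(edit, "a spectator watching the match", in_scope=True))   # True
print(check_caption(edit, "the referee raising a flag", in_scope=False))      # True
```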

## Intended Use

LACE-Bench is designed for:

- Evaluating **localism-aware compositionality** — whether VLMs can selectively deploy local and global compositional operations as the task demands
- Measuring **global binding stability**: how consistently local semantic units of atomic regions bind into global captions
- Quantifying **cross-scale interference**: the degree to which local counterfactual edits propagate into unintended global semantic regions
- Benchmarking **fine-tuning strategies** (e.g., LoRA, blur+bbox visual grounding) for compositional captioning

## Data Fields

Each record corresponds to one image and contains the following fields:

| Field | Type | Description |
|---|---|---|
| `image_id` | string | Visual Genome image identifier |
| `regions` | list[object] | Annotated bounding box regions (atomic) |
| `narratives` | string | Description of the full image |
| `keywords` | list[object] | Key noun concepts grounded in WordNet |
| `relation_centric_regions` | list[object] | Groups of region IDs with a human-written relational annotation |
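
The splits are plain parquet files, so they can be inspected directly; a minimal loading sketch using `pandas` (the paths assume the repository has been downloaded locally):

```python
# Load the train/test splits and peek at one record's top-level fields.
import pandas as pd

train = pd.read_parquet("data/lace_train.parquet")
test = pd.read_parquet("data/lace_test.parquet")
print(len(train), len(test))       # expected: 9874 2183

record = train.iloc[0]
print(record["image_id"])          # Visual Genome image identifier
print(record["narratives"])        # description of the full image
print(len(record["regions"]))      # number of annotated atomic regions
```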

### `regions`

Each atomic region corresponds to a single object marked with a distinct colored bounding box.

| Key | Type | Description |
|---|---|---|
| `id` | string | Region identifier (`{image_id}_{region_index}`) |
| `color` | string | Bounding box color used for visual grounding (aqua / yellow / lime / red / blue / orange / magenta) |
| `x`, `y` | float | Top-left corner coordinates of the bounding box |
| `width`, `height` | float | Width and height of the bounding box |
| `captions` | list[object] | Human-annotated region-level captions (see below) |
| `object_ids` | list[int] | Linked object IDs from Visual Genome |
| `relationships` | list[object] | Scene graph relationships associated with this region (see below) |

**`regions[].captions`**

| Key | Type | Description |
|---|---|---|
| `caption` | string | Original human-written caption for the region (e.g. `"the tall clock on the street"`) |
| `counterfactual_caption` | string | Minimally edited caption where one noun is replaced with a plausible but incorrect alternative (e.g. `"the tall dart board on the street"`) |

**`regions[].relationships`**

| Key | Type | Description |
|---|---|---|
| `relationship_id` | int | Visual Genome relationship identifier |
| `predicate` | string | Relation predicate between subject and object (e.g. `"on"`) |
| `synsets` | list[string] | WordNet synsets for the predicate (e.g. `["along.r.01"]`) |
| `subject_id` | int | Visual Genome object ID of the subject |
| `object_id` | int | Visual Genome object ID of the object |
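
Rendering the colored-box grounding is straightforward; a minimal sketch using Pillow (the image path and the sample region dict are illustrative, since the images themselves come from Visual Genome and are not stored in the parquet files):

```python
# Overlay one atomic region's bounding box in its annotated color.
from PIL import Image, ImageDraw

def draw_region(image: Image.Image, region: dict) -> Image.Image:
    marked = image.copy()
    draw = ImageDraw.Draw(marked)
    x, y = region["x"], region["y"]
    draw.rectangle(
        [x, y, x + region["width"], y + region["height"]],
        outline=region["color"],  # aqua / yellow / lime / red / blue / orange / magenta
        width=4,
    )
    return marked

# Illustrative region dict matching the schema above.
region = {"x": 421.0, "y": 91.0, "width": 84.0, "height": 219.0, "color": "aqua"}
image = Image.open("2358647.jpg")  # fetched from Visual Genome by image_id
draw_region(image, region).save("2358647_region0.png")
```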

---

### `keywords`

Each entry represents a key noun concept extracted from region captions and grounded in WordNet.

| Key | Type | Description |
|---|---|---|
| `synset_id` | string | WordNet synset identifier (e.g. `clock.n.01`) |
| `synonyms` | list[string] | Lemma names belonging to this synset (e.g. `["clock"]`) |
| `nearest_ancestor` | string | Closest hypernym synset in the WordNet hierarchy (e.g. `timepiece.n.01`) |
| `supersense` | string | Broad semantic category from WordNet lexicographer files (e.g. `noun.artifact`, `noun.person`) |
| `counterfactual` | list[object] | Human-annotated counterfactual substitutions for this concept (see below) |

**`keywords[].counterfactual`**

| Key | Type | Description |
|---|---|---|
| `human_annotation` | string | Plausible but incorrect substitute chosen by a human annotator (e.g. `"dart board"`) |
| `candidate` | list[string] | Candidate substitutions presented to the annotator for selection |

> `counterfactual` is empty (`[]`) for concepts where no counterfactual annotation was collected.
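
These fields map directly onto NLTK's WordNet interface; a minimal lookup sketch (requires the `wordnet` corpus to be downloaded first):

```python
# Resolve a synset_id against WordNet and recover the derived fields.
# Assumes: pip install nltk && python -c "import nltk; nltk.download('wordnet')"
from nltk.corpus import wordnet as wn

synset = wn.synset("clock.n.01")   # `synset_id`
print(synset.lemma_names())        # ['clock']                  -> `synonyms`
print(synset.hypernyms())          # [Synset('timepiece.n.01')] -> `nearest_ancestor`
print(synset.lexname())            # 'noun.artifact'            -> `supersense`
```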

---

### `relation_centric_regions`

Each entry groups multiple atomic regions and provides a human-written description of the relational context among them.

| Key | Type | Description |
|---|---|---|
| `human_annotation` | string | Free-form description of the spatial or semantic relationship among the grouped regions (e.g. `"The central clock tower... stands as a focal point against the backdrop of the building's pillars."`) |
| `region_ids` | list[string] | IDs of the atomic regions involved in this relational group (e.g. `["2358647_0", "2358647_1"]`) |
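
For global captioning, a compound region can be formed by taking the union of the grouped atomic boxes; a hypothetical helper sketch:

```python
# Hypothetical helper: smallest box covering a relation-centric group.
def union_bbox(regions: list[dict], region_ids: list[str]) -> tuple:
    selected = [r for r in regions if r["id"] in set(region_ids)]
    x0 = min(r["x"] for r in selected)
    y0 = min(r["y"] for r in selected)
    x1 = max(r["x"] + r["width"] for r in selected)
    y1 = max(r["y"] + r["height"] for r in selected)
    return x0, y0, x1 - x0, y1 - y0  # x, y, width, height

regions = [
    {"id": "2358647_0", "x": 421.0, "y": 91.0, "width": 84.0, "height": 219.0},
    {"id": "2358647_1", "x": 300.0, "y": 40.0, "width": 260.0, "height": 420.0},
]
print(union_bbox(regions, ["2358647_0", "2358647_1"]))  # (300.0, 40.0, 260.0, 420.0)
```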

## Citation

```bibtex
@dataset{anonymous2026lacebench,
  title  = {LACE-Bench: Localism-Aware Compositionality Evaluation Benchmark for Vision-Language Models},
  author = {Anonymous},
  year   = {2026},
}
```
data/lace_test.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:41042c91e0228b5fa4e9b88bbe216294098ef76f5706a955de22e8f73d925fa6
size 19227949
data/lace_train.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42d8839758aeeb9cc69c6b7bbe4c7f03783237525ebc522813aae830bfd49366
size 78457729