---
license: cc-by-4.0
task_categories:
- image-to-text
- visual-question-answering
tags:
- Multimodal benchmark
- Vision-Language Models
- Compositionality
- Localism-aware compositionality
- Multimodal knowledge editing
configs:
- config_name: default
  data_files:
  - split: train
    path: data/lace_train.parquet
  - split: test
    path: data/lace_test.parquet
- config_name: keyword_dict
  data_files:
  - split: train
    path: data/train_keyword_dict.parquet
  - split: test
    path: data/test_keyword_dict.parquet
---
# LACE-Bench: Localism-Aware Compositionality Evaluation Benchmark for Vision-Language Models

> **LACE-Bench** is a benchmark for evaluating *localism-aware compositionality* in vision-language models (VLMs) — the ability to selectively integrate local region-level semantics with global scene-level understanding. It comprises two complementary tasks: **LoGoCap** and **MMComE**.

## Dataset Card

| Field | Info |
|---|---|
| **Tasks** | LoGoCap (Local & Global Compositional Captioning), MMComE (Multimodal Compositional Knowledge Editing) |
| **Modality** | Vision-Language |
| **Splits** | Train (9,874 images) / Test (2,183 images) |
| **Total** | 12,057 images |
| **Image Source** | [Visual Genome](https://homes.cs.washington.edu/~ranjay/visualgenome/api.html) |
| **License** | CC BY 4.0 |
## Tasks

### 1. LoGoCap — Multi-grained Local and Global Compositional Captioning

LoGoCap evaluates a model's *static selectivity*: can it simultaneously understand the global scene while identifying and grounding constituent local objects?

- **Local captioning**: given an atomic region (a single object marked with a colored bounding box), generate a region-specific caption.
- **Global captioning**: given a compound region (a group of atomic regions), generate a single coherent caption that integrates all constituent local parts while introducing holistic scene-level context (moods, relations, atmosphere) not present in any individual local caption.

Evaluation uses standard captioning metrics (BLEU, ROUGE-1, METEOR) against human-annotated reference captions.
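As an illustration, these reference-based metrics can be computed with the Hugging Face `evaluate` library. This is a minimal sketch, not the benchmark's own evaluation script, and the example captions are made up:

```python
# pip install evaluate nltk rouge_score
import evaluate

predictions = ["a tall clock on the street"]   # hypothetical model output
references = ["the tall clock on the street"]  # human-annotated reference

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")

# BLEU expects one list of reference strings per prediction.
print(bleu.compute(predictions=predictions, references=[references])["bleu"])
print(rouge.compute(predictions=predictions, references=references)["rouge1"])
print(meteor.compute(predictions=predictions, references=references)["meteor"])
```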
### 2. MMComE — Multimodal Compositional Knowledge Editing

MMComE evaluates a model's *dynamic robustness*: can it apply a localized counterfactual edit (e.g., replacing *referee* with *spectator*) consistently across region-marked images, while preserving all unrelated global semantics?

A multimodal edit request is defined as a tuple `(I, r, ph → ph*)`, where:

- `I` is the image and `r` is the target region
- `ph` is the original phrase to be replaced, and `ph*` is the counterfactual substitute

The model is evaluated on whether it correctly reflects `ph*` in all in-scope regions (edited) while leaving all out-of-scope regions (unedited) intact.
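A minimal sketch of how such an edit request could be represented and checked against model outputs. The dataclass and the naive phrase-matching checks below are illustrative assumptions, not part of the released evaluation code:

```python
from dataclasses import dataclass

@dataclass
class EditRequest:
    """Illustrative container for a multimodal edit (I, r, ph -> ph*)."""
    image_id: str   # `I`: Visual Genome image identifier
    region_id: str  # `r`: target region, e.g. "2358647_0"
    ph: str         # original phrase, e.g. "referee"
    ph_star: str    # counterfactual substitute, e.g. "spectator"

def edit_applied(in_scope_caption: str, req: EditRequest) -> bool:
    # In-scope success: ph* appears and ph no longer does (naive string check).
    return req.ph_star in in_scope_caption and req.ph not in in_scope_caption

def semantics_preserved(out_of_scope_caption: str, req: EditRequest) -> bool:
    # Out-of-scope success: the counterfactual phrase did not leak in.
    return req.ph_star not in out_of_scope_caption
```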
## Intended Use

LACE-Bench is designed for:

- Evaluating **localism-aware compositionality**: whether VLMs can selectively deploy local and global compositional operations as the task demands
- Measuring **global binding stability**: how consistently local semantic units of atomic regions bind into global captions
- Quantifying **cross-scale interference**: the degree to which local counterfactual edits propagate into unintended global semantic regions
- Benchmarking **fine-tuning strategies** (e.g., LoRA, blur+bbox visual grounding) for compositional captioning
## Data Fields

Each record corresponds to one image and contains the following fields:

| Field | Type | Description |
|---|---|---|
| `image_id` | string | Visual Genome image identifier |
| `regions` | list[object] | Annotated bounding-box regions (atomic) |
| `narratives` | string | Description of the full image |
| `keywords` | list[object] | Key noun concepts grounded in WordNet |
| `relation_centric_regions` | list[object] | Groups of region IDs with a human-written relational annotation |
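For orientation, a record from the `default` config can be loaded and inspected as follows. The field names follow the schema above; the printed values depend on the release:

```python
from datasets import load_dataset

ds = load_dataset("lacebench/LACE-Bench", "default")

record = ds["train"][0]
print(record["image_id"])
print(len(record["regions"]), "regions")
print(record["narratives"])
```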
### `regions`

Each atomic region corresponds to a single object marked with a distinct colored bounding box.

| Key | Type | Description |
|---|---|---|
| `id` | string | Region identifier (`{image_id}_{region_index}`) |
| `color` | string | Bounding box color used for visual grounding (aqua / yellow / lime / red / blue / orange / magenta) |
| `x`, `y` | float | Top-left corner coordinates of the bounding box |
| `width`, `height` | float | Width and height of the bounding box |
| `captions` | list[object] | Human-annotated region-level captions (see below) |
| `object_ids` | list[int] | Linked object IDs from Visual Genome |
| `relationships` | list[object] | Scene graph relationships associated with this region (see below) |
**`regions[].captions`**

| Key | Type | Description |
|---|---|---|
| `caption` | string | Original human-written caption for the region (e.g. `"the tall clock on the street"`) |
| `counterfactual_caption` | string | Minimally edited caption where one noun is replaced with a plausible but incorrect alternative (e.g. `"the tall dart board on the street"`) |
**`regions[].relationships`**

| Key | Type | Description |
|---|---|---|
| `relationship_id` | int | Visual Genome relationship identifier |
| `predicate` | string | Relation predicate between subject and object (e.g. `"on"`) |
| `synsets` | list[string] | WordNet synsets for the predicate (e.g. `["along.r.01"]`) |
| `subject_id` | int | Visual Genome object ID of the subject |
| `object_id` | int | Visual Genome object ID of the object |
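Since regions are grounded by colored bounding boxes, the `x`/`y`/`width`/`height`/`color` fields can be rendered directly onto the source image. A minimal sketch with Pillow; the local image path is an assumption, since Visual Genome images are downloaded separately:

```python
from PIL import Image, ImageDraw

def draw_region(image: Image.Image, region: dict) -> Image.Image:
    """Overlay one region's bounding box in its annotated grounding color."""
    out = image.copy()
    draw = ImageDraw.Draw(out)
    x, y = region["x"], region["y"]
    w, h = region["width"], region["height"]
    # The color names (aqua, lime, magenta, ...) are valid CSS3 names in Pillow.
    draw.rectangle([x, y, x + w, y + h], outline=region["color"], width=3)
    return out

# Usage (assumes the Visual Genome image was fetched locally):
# img = Image.open("VG_100K/2358647.jpg")
# boxed = draw_region(img, record["regions"][0])
```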
---
### `keywords`

Each entry represents a key noun concept extracted from region captions and grounded in WordNet.

| Key | Type | Description |
|---|---|---|
| `synset_id` | string | WordNet synset identifier (e.g. `clock.n.01`) |
| `synonyms` | list[string] | Lemma names belonging to this synset (e.g. `["clock"]`) |
| `nearest_ancestor` | string | Closest hypernym synset in the WordNet hierarchy (e.g. `timepiece.n.01`) |
| `supersense` | string | Broad semantic category from WordNet lexicographer files (e.g. `noun.artifact`, `noun.person`) |
| `counterfactual` | list[object] | Human-annotated counterfactual substitutions for this concept (see below) |
**`keywords[].counterfactual`**

| Key | Type | Description |
|---|---|---|
| `human_annotation` | string | Plausible but incorrect substitute chosen by a human annotator (e.g. `"dart board"`) |
| `candidate` | list[string] | Candidate substitutions presented to the annotator for selection |

> `counterfactual` is empty (`[]`) for concepts where no counterfactual annotation was collected.
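The WordNet-derived fields (`synonyms`, `nearest_ancestor`, `supersense`) can be cross-checked with NLTK; a small sketch, assuming `nltk` and its `wordnet` corpus are installed:

```python
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

syn = wn.synset("clock.n.01")
print(syn.lemma_names())          # ['clock']           (cf. `synonyms`)
print(syn.hypernyms()[0].name())  # 'timepiece.n.01'    (cf. `nearest_ancestor`)
print(syn.lexname())              # 'noun.artifact'     (cf. `supersense`)
```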
---
### `relation_centric_regions`

Each entry groups multiple atomic regions and provides a human-written description of the relational context among them.

| Key | Type | Description |
|---|---|---|
| `human_annotation` | string | Free-form description of the spatial or semantic relationship among the grouped regions (e.g. `"The central clock tower... stands as a focal point against the backdrop of the building's pillars."`) |
| `region_ids` | list[string] | IDs of the atomic regions involved in this relational group (e.g. `["2358647_0", "2358647_1"]`) |
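Because `region_ids` reference `regions[].id` within the same record, relational groups can be resolved back to their atomic regions. A minimal sketch, using the field names documented above:

```python
def resolve_group(record: dict, group: dict) -> list[dict]:
    """Map a relational group's region IDs back to the full region objects."""
    by_id = {region["id"]: region for region in record["regions"]}
    return [by_id[rid] for rid in group["region_ids"]]

# Usage:
# group = record["relation_centric_regions"][0]
# regions = resolve_group(record, group)
# print(group["human_annotation"], "->", [r["id"] for r in regions])
```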
## `keyword_dict` Config

In addition to the per-image records (`default` config), the dataset ships a `keyword_dict` config that maps each WordNet synset ID to the list of surface phrases observed for that concept across the corresponding split's region captions. These dictionaries are useful for keyword-based lookup, counterfactual phrase matching, and lexical normalization of mentions.

**Files**

- `data/train_keyword_dict.parquet`
- `data/test_keyword_dict.parquet`

**Schema**

| Field | Type | Description |
|---|---|---|
| `synset_id` | string | WordNet synset identifier (e.g. `tree.n.01`), matching `keywords[].synset_id` in the `default` config |
| `phrases` | list[string] | All distinct surface phrases (synonyms, plural forms, modifier-noun variants, casing variants) observed for the synset in that split |
**Example rows**

| `synset_id` | `phrases` |
|---|---|
| `leaf.n.01` | `["leaves", "foliage", "leaf", "banana leaf", "dried leaves", "green leaves", ...]` |
| `tree.n.01` | `["tree", "trunk", "evergreen tree", "pine trees", "fir tree", ...]` |
| `bus.n.01` | `["bus", "city bus", "double decker bus", "motorbus", "coach", ...]` |
**Loading**

```python
from datasets import load_dataset

ds = load_dataset("lacebench/LACE-Bench", "keyword_dict")
```
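For phrase lookup, a split can be flattened into a plain dictionary; a small sketch assuming the two-column schema above:

```python
# Build a synset -> phrases lookup table from the train split.
keyword_dict = {row["synset_id"]: row["phrases"] for row in ds["train"]}
print(keyword_dict.get("tree.n.01", [])[:3])
```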
## Citation

```bibtex
@dataset{anonymous2026lacebench,
  title  = {LACE-Bench: Localism-Aware Compositionality Evaluation Benchmark for Vision-Language Models},
  author = {Anonymous},
  year   = {2026},
}
```