---
license: cc-by-4.0
task_categories:
  - image-to-text
  - visual-question-answering
tags:
  - Multimodal benchmark
  - Vision-Language Models
  - Compositionality
  - Localism-aware compositionality
  - Multimodal knowledge editing
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/lace_train.parquet
      - split: test
        path: data/lace_test.parquet
  - config_name: keyword_dict
    data_files:
      - split: train
        path: data/train_keyword_dict.parquet
      - split: test
        path: data/test_keyword_dict.parquet
---

# LACE-Bench: Localism-Aware Compositionality Evaluation Benchmark for Vision-Language Models

LACE-Bench is a benchmark for evaluating localism-aware compositionality in vision-language models (VLMs) — the ability to selectively integrate local region-level semantics with global scene-level understanding. It comprises two complementary tasks: LoGoCap and MMComE.

## Dataset Card

| Field | Info |
|---|---|
| Tasks | LoGoCap (Local & Global Compositional Captioning), MMComE (Multimodal Compositional Knowledge Editing) |
| Modality | Vision-Language |
| Splits | Train (9,874 images) / Test (2,183 images) |
| Total | 12,057 images |
| Image Source | Visual Genome |
| License | CC BY 4.0 |

## Data Fields

Each record corresponds to one image and contains the following fields:

| Field | Type | Description |
|---|---|---|
| `image_id` | string | Visual Genome image identifier |
| `regions` | list[object] | Annotated bounding-box regions (atomic) |
| `narratives` | string | Description of the full image |
| `keywords` | list[object] | Key noun concepts grounded in WordNet |
| `relation_centric_regions` | list[object] | Groups of region IDs with a human-written relational annotation |
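
For orientation, here is a minimal sketch of inspecting one record with the `datasets` library (see Loading below); field access follows the schema documented in this section, and the printed fields are illustrative:

```python
from datasets import load_dataset

# Per-image records (default config); splits are "train" and "test".
ds = load_dataset("lacebench/LACE-Bench", split="test")

record = ds[0]
print(record["image_id"])                            # Visual Genome image ID
print(record["narratives"])                          # full-image description
print(len(record["regions"]))                        # number of atomic regions
print([k["synset_id"] for k in record["keywords"]])  # grounded noun concepts
```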

### `regions`

Each atomic region corresponds to a single object marked with a distinctly colored bounding box.

| Key | Type | Description |
|---|---|---|
| `id` | string | Region identifier (`{image_id}_{region_index}`) |
| `color` | string | Bounding-box color used for visual grounding (aqua / yellow / lime / red / blue / orange / magenta) |
| `x`, `y` | float | Top-left corner coordinates of the bounding box |
| `width`, `height` | float | Width and height of the bounding box |
| `captions` | list[object] | Human-annotated region-level captions (see below) |
| `object_ids` | list[int] | Linked object IDs from Visual Genome |
| `relationships` | list[object] | Scene-graph relationships associated with this region (see below) |
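
Since records store only the `image_id` (the pixels come from Visual Genome itself), a sketch like the following can overlay the annotated boxes, assuming the corresponding Visual Genome image is available locally; the path layout in the usage comment is hypothetical:

```python
from PIL import Image, ImageDraw

def draw_regions(image_path: str, regions: list[dict]) -> Image.Image:
    """Overlay each atomic region's bounding box in its annotated color."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for r in regions:
        x, y, w, h = r["x"], r["y"], r["width"], r["height"]
        # All seven annotation colors are valid CSS color names understood by Pillow.
        draw.rectangle([x, y, x + w, y + h], outline=r["color"], width=3)
    return img

# Assuming a local Visual Genome layout such as VG_100K/:
# draw_regions(f"VG_100K/{record['image_id']}.jpg", record["regions"])
```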

### `regions[].captions`

| Key | Type | Description |
|---|---|---|
| `caption` | string | Original human-written caption for the region (e.g. "the tall clock on the street") |
| `counterfactual_caption` | string | Minimally edited caption in which one noun is replaced with a plausible but incorrect alternative (e.g. "the tall dart board on the street") |
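
The paired captions lend themselves to forced-choice probes. A small sketch (the helper name is ours) that collects the (original, counterfactual) pairs of a record:

```python
def caption_pairs(record: dict) -> list[tuple[str, str]]:
    """Collect (caption, counterfactual_caption) pairs across all regions."""
    return [
        (c["caption"], c["counterfactual_caption"])
        for region in record["regions"]
        for c in region["captions"]
    ]

# e.g. [("the tall clock on the street", "the tall dart board on the street"), ...]
```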

### `regions[].relationships`

| Key | Type | Description |
|---|---|---|
| `relationship_id` | int | Visual Genome relationship identifier |
| `predicate` | string | Relation predicate between subject and object (e.g. "on") |
| `synsets` | list[string] | WordNet synsets for the predicate (e.g. `["along.r.01"]`) |
| `subject_id` | int | Visual Genome object ID of the subject |
| `object_id` | int | Visual Genome object ID of the object |
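
To recover plain scene-graph edges from these fields, a sketch like the following (helper name ours) flattens each region's relationships into (subject_id, predicate, object_id) triples:

```python
def relation_triples(record: dict) -> list[tuple[int, str, int]]:
    """Flatten all regions' scene-graph edges into (subject, predicate, object) triples."""
    return [
        (rel["subject_id"], rel["predicate"], rel["object_id"])
        for region in record["regions"]
        for rel in region["relationships"]
    ]
```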

### `keywords`

Each entry represents a key noun concept extracted from region captions and grounded in WordNet.

| Key | Type | Description |
|---|---|---|
| `synset_id` | string | WordNet synset identifier (e.g. `clock.n.01`) |
| `synonyms` | list[string] | Lemma names belonging to this synset (e.g. `["clock"]`) |
| `nearest_ancestor` | string | Closest hypernym synset in the WordNet hierarchy (e.g. `timepiece.n.01`) |
| `supersense` | string | Broad semantic category from the WordNet lexicographer files (e.g. `noun.artifact`, `noun.person`) |
| `counterfactual` | list[object] | Human-annotated counterfactual substitutions for this concept (see below) |

### `keywords[].counterfactual`

| Key | Type | Description |
|---|---|---|
| `human_annotation` | string | Plausible but incorrect substitute chosen by a human annotator (e.g. "dart board") |
| `candidate` | list[string] | Candidate substitutions presented to the annotator for selection |

`counterfactual` is empty (`[]`) for concepts where no counterfactual annotation was collected.
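
For counterfactual keyword substitution, a minimal sketch (helper name ours) that maps each annotated synset to its human-chosen substitute, skipping concepts whose `counterfactual` list is empty:

```python
def keyword_substitutions(record: dict) -> dict[str, str]:
    """Map synset_id -> human-chosen counterfactual substitute."""
    return {
        kw["synset_id"]: cf["human_annotation"]
        for kw in record["keywords"]
        for cf in kw["counterfactual"]  # empty list => concept is skipped
    }

# e.g. {"clock.n.01": "dart board", ...}
```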


### `relation_centric_regions`

Each entry groups multiple atomic regions and provides a human-written description of the relational context among them.

| Key | Type | Description |
|---|---|---|
| `human_annotation` | string | Free-form description of the spatial or semantic relationship among the grouped regions (e.g. "The central clock tower... stands as a focal point against the backdrop of the building's pillars.") |
| `region_ids` | list[string] | IDs of the atomic regions involved in this relational group (e.g. `["2358647_0", "2358647_1"]`) |
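
Because `region_ids` reference the `id` field of the atomic regions, each group can be resolved back to its full region objects with a sketch like this (helper name ours):

```python
def resolve_groups(record: dict) -> list[dict]:
    """Attach full region objects to each relation-centric group."""
    by_id = {r["id"]: r for r in record["regions"]}
    return [
        {
            "description": g["human_annotation"],
            "regions": [by_id[rid] for rid in g["region_ids"]],
        }
        for g in record["relation_centric_regions"]
    ]
```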

## `keyword_dict` Config

In addition to the per-image records (`default` config), the dataset ships a `keyword_dict` config that maps each WordNet synset ID to the list of surface phrases observed for that concept across the corresponding split's region captions. These dictionaries are useful for keyword-based lookup, counterfactual phrase matching, and lexical normalization of mentions.

### Files

- `data/train_keyword_dict.parquet`
- `data/test_keyword_dict.parquet`

### Schema

| Field | Type | Description |
|---|---|---|
| `synset_id` | string | WordNet synset identifier (e.g. `tree.n.01`), matching `keywords[].synset_id` in the default config |
| `phrases` | list[string] | All distinct surface phrases (synonyms, plural forms, modifier-noun variants, casing variants) observed for the synset in that split |

### Example rows

| synset_id | phrases |
|---|---|
| `leaf.n.01` | `["leaves", "foliage", "leaf", "banana leaf", "dried leaves", "green leaves", ...]` |
| `tree.n.01` | `["tree", "trunk", "evergreen tree", "pine trees", "fir tree", ...]` |
| `bus.n.01` | `["bus", "city bus", "double decker bus", "motorbus", "coach", ...]` |
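
For lexical normalization, the dictionary is typically inverted into a phrase-to-synset lookup. A minimal sketch, with lowercasing as an assumed normalization choice (a phrase could in principle appear under multiple synsets, in which case the last write wins here):

```python
from datasets import load_dataset

kd = load_dataset("lacebench/LACE-Bench", "keyword_dict", split="test")

# Invert synset -> phrases into phrase -> synset; lowercase to fold casing variants.
phrase_to_synset = {
    phrase.lower(): row["synset_id"]
    for row in kd
    for phrase in row["phrases"]
}

print(phrase_to_synset.get("double decker bus"))  # -> "bus.n.01"
```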

## Loading

```python
from datasets import load_dataset

ds = load_dataset("lacebench/LACE-Bench")                  # per-image records (default config)
kd = load_dataset("lacebench/LACE-Bench", "keyword_dict")  # synset -> surface-phrase dictionaries
```

## Citation

```bibtex
@dataset{anonymous2026lacebench,
  title  = {LACE-Bench: Localism-Aware Compositionality Evaluation Benchmark for Vision-Language Models},
  author = {Anonymous},
  year   = {2026},
}
```