# LACE-Bench: Localism-Aware Compositionality Evaluation Benchmark for Vision-Language Models
LACE-Bench is a benchmark for evaluating localism-aware compositionality in vision-language models (VLMs) — the ability to selectively integrate local region-level semantics with global scene-level understanding. It comprises two complementary tasks: LoGoCap and MMComE.
## Dataset Card
| Field | Info |
|---|---|
| Tasks | LoGoCap (Local & Global Compositional Captioning), MMComE (Multimodal Compositional Knowledge Editing) |
| Modality | Vision-Language |
| Split | Train (9,874 images) / Test (2,183 images) |
| Total | 12,057 images |
| Image Source | Visual Genome |
| License | CC BY 4.0 |
## Data Fields
Each record corresponds to one image and contains the following fields:
| Field | Type | Description |
|---|---|---|
| `image_id` | string | Visual Genome image identifier |
| `regions` | list[object] | Annotated bounding box regions (atomic) |
| `narratives` | string | Description of the full image |
| `keywords` | list[object] | Key noun concepts grounded in WordNet |
| `relation_centric_regions` | list[object] | Groups of region IDs with a human-written relational annotation |
### `regions`
Each atomic region corresponds to a single object marked with a distinct colored bounding box.
| Key | Type | Description |
|---|---|---|
| `id` | string | Region identifier (`{image_id}_{region_index}`) |
| `color` | string | Bounding box color used for visual grounding (aqua / yellow / lime / red / blue / orange / magenta) |
| `x`, `y` | float | Top-left corner coordinates of the bounding box |
| `width`, `height` | float | Width and height of the bounding box |
| `captions` | list[object] | Human-annotated region-level captions (see below) |
| `object_ids` | list[int] | Linked object IDs from Visual Genome |
| `relationships` | list[object] | Scene graph relationships associated with this region (see below) |
### `regions[].captions`
| Key | Type | Description |
|---|---|---|
| `caption` | string | Original human-written caption for the region (e.g. "the tall clock on the street") |
| `counterfactual_caption` | string | Minimally edited caption in which one noun is replaced with a plausible but incorrect alternative (e.g. "the tall dart board on the street") |
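Because each caption comes paired with its counterfactual, a record can be flattened into discrimination items directly. A minimal sketch under the schema above (the `caption_pairs` helper is ours, not part of any dataset tooling):

```python
def caption_pairs(record):
    """Yield one (region_id, caption, counterfactual_caption) triple
    per human-written caption, across all regions of an image record."""
    for region in record["regions"]:
        for cap in region["captions"]:
            yield region["id"], cap["caption"], cap["counterfactual_caption"]
```

Each triple gives a positive/negative caption pair for the same region, which is the natural unit for a two-way caption-matching probe.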
### `regions[].relationships`
| Key | Type | Description |
|---|---|---|
| `relationship_id` | int | Visual Genome relationship identifier |
| `predicate` | string | Relation predicate between subject and object (e.g. "on") |
| `synsets` | list[string] | WordNet synsets for the predicate (e.g. `["along.r.01"]`) |
| `subject_id` | int | Visual Genome object ID of the subject |
| `object_id` | int | Visual Genome object ID of the object |
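Region coordinates are floats in `(x, y, width, height)` form, and values can fall slightly outside the image (including small negative offsets), so converting to corner format usually includes an optional clamp. A minimal sketch with a hypothetical `region_to_xyxy` helper:

```python
def region_to_xyxy(region, img_w=None, img_h=None):
    """Convert a region dict's (x, y, width, height) to (x1, y1, x2, y2).

    Coordinates are floats and may extend slightly past the image
    borders; pass img_w/img_h to clamp into [0, img_w] x [0, img_h].
    """
    x1, y1 = region["x"], region["y"]
    x2, y2 = x1 + region["width"], y1 + region["height"]
    if img_w is not None and img_h is not None:
        x1, y1 = max(0.0, x1), max(0.0, y1)
        x2, y2 = min(float(img_w), x2), min(float(img_h), y2)
    return x1, y1, x2, y2
```

Whether to clamp depends on the downstream use; cropping needs it, while IoU against other raw boxes may not.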
### `keywords`
Each entry represents a key noun concept extracted from region captions and grounded in WordNet.
| Key | Type | Description |
|---|---|---|
| `synset_id` | string | WordNet synset identifier (e.g. `clock.n.01`) |
| `synonyms` | list[string] | Lemma names belonging to this synset (e.g. `["clock"]`) |
| `nearest_ancestor` | string | Closest hypernym synset in the WordNet hierarchy (e.g. `timepiece.n.01`) |
| `supersense` | string | Broad semantic category from WordNet lexicographer files (e.g. `noun.artifact`, `noun.person`) |
| `counterfactual` | list[object] | Human-annotated counterfactual substitutions for this concept (see below) |
### `keywords[].counterfactual`
| Key | Type | Description |
|---|---|---|
| `human_annotation` | string | Plausible but incorrect substitute chosen by a human annotator (e.g. "dart board") |
| `candidate` | list[string] | Candidate substitutions presented to the annotator for selection |
`counterfactual` is empty (`[]`) for concepts where no counterfactual annotation was collected.
### `relation_centric_regions`
Each entry groups multiple atomic regions and provides a human-written description of the relational context among them.
| Key | Type | Description |
|---|---|---|
| `human_annotation` | string | Free-form description of the spatial or semantic relationship among the grouped regions (e.g. "The central clock tower... stands as a focal point against the backdrop of the building's pillars.") |
| `region_ids` | list[string] | IDs of the atomic regions involved in this relational group (e.g. `["2358647_0", "2358647_1"]`) |
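Relational groups reference atomic regions only by ID, so pairing each annotation with its full region objects takes one lookup pass over the record. A minimal sketch (the helper name is ours):

```python
def resolve_relational_groups(record):
    """Pair each relational annotation with the full region objects
    it references, looked up by region id within the same record."""
    by_id = {r["id"]: r for r in record["regions"]}
    return [
        (group["human_annotation"], [by_id[rid] for rid in group["region_ids"]])
        for group in record["relation_centric_regions"]
    ]
```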
## `keyword_dict` Config
In addition to the per-image records (default config), the dataset ships a keyword_dict config that maps each WordNet synset ID to the list of surface phrases observed for that concept across the corresponding split's region captions. These dictionaries are useful for keyword-based lookup, counterfactual phrase matching, and lexical normalization of mentions.
### Files
- `data/train_keyword_dict.parquet`
- `data/test_keyword_dict.parquet`
### Schema
| Field | Type | Description |
|---|---|---|
| `synset_id` | string | WordNet synset identifier (e.g. `tree.n.01`), matching `keywords[].synset_id` in the default config |
| `phrases` | list[string] | All distinct surface phrases (synonyms, plural forms, modifier-noun variants, casing variants) observed for the synset in that split |
### Example rows
| synset_id | phrases |
|---|---|
| `leaf.n.01` | `["leaves", "foliage", "leaf", "banana leaf", "dried leaves", "green leaves", ...]` |
| `tree.n.01` | `["tree", "trunk", "evergreen tree", "pine trees", "fir tree", ...]` |
| `bus.n.01` | `["bus", "city bus", "double decker bus", "motorbus", "coach", ...]` |
### Loading
```python
from datasets import load_dataset

ds = load_dataset("lacebench/LACE-Bench", "keyword_dict")
```
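For phrase-level lookups (e.g. counterfactual phrase matching), the `keyword_dict` rows can be inverted into a phrase-to-synset index. A minimal sketch assuming rows follow the schema above (the `phrase_to_synsets` helper is ours):

```python
def phrase_to_synsets(rows):
    """Invert keyword_dict rows into a lowercase-phrase -> synset_id lookup.
    A surface phrase can belong to more than one synset, so values are sets."""
    index = {}
    for row in rows:
        for phrase in row["phrases"]:
            index.setdefault(phrase.lower(), set()).add(row["synset_id"])
    return index
```

Lowercasing at build time handles the casing variants noted in the schema; apply the same normalization to query phrases before lookup.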
## Citation
```bibtex
@dataset{anonymous2026lacebench,
  title  = {LACE-Bench: Localism-Aware Compositionality Evaluation Benchmark for Vision-Language Models},
  author = {Anonymous},
  year   = {2026},
}
```