---
license: mit
task_categories:
- image-to-text
language:
- en
size_categories:
- 10K<n<100K
---

# Embodied Image Captioning – Manually Annotated Test Set

**Paper**: [Embodied Image Captioning: Self-supervised Learning Agents for Spatially Coherent Image Descriptions (ICCV 2025)](https://arxiv.org/abs/2504.08531)
**Authors**: Tommaso Galliena, Tommaso Apicella, Stefano Rosa, Pietro Morerio, Alessio Del Bue, Lorenzo Natale
**Affiliations**: Italian Institute of Technology (IIT), University of Genoa
**Project Website**: [https://hsp-iit.github.io/embodied-captioning](https://hsp-iit.github.io/embodied-captioning)

---

## 📦 Dataset Description

This repository contains the **test set with human-annotated object captions** used to evaluate captioning consistency in the ICCV 2025 paper.

The data was collected in simulated indoor environments (Gibson and HM3D), where an agent explores each scene and captures RGB views of objects from different perspectives. Each object is annotated with a **single human-written caption**, verified to describe the object consistently across multiple views.

This test set enables rigorous evaluation of object-level image captioning models and of their robustness to viewpoint changes.

---

## 📁 Contents

- `images/`: Directory of cropped RGB images of objects.
- `annotations.csv`: File containing metadata and ground-truth captions (see the loading sketch below), with the columns:
  - `filename`: Image file name
  - `scene_id`: Scene/environment identifier
  - `object_id`: Unique object instance ID
  - `view_id`: Viewpoint index
  - `bbox`: Bounding box coordinates (format: `x, y, w, h`)
  - `policy`: Exploration policy used (`random`, `frontier`, or `CLA`)
  - `caption`: Human-annotated caption for the object
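
A minimal loading sketch with pandas, assuming `annotations.csv` is a standard comma-separated file whose header matches the column list above (the exact `bbox` string encoding may differ):

```python
import pandas as pd

# Load the annotation file; column names follow the schema above.
df = pd.read_csv("annotations.csv")

# Each object appears under several viewpoints; grouping by
# (scene_id, object_id) collects the views that share one human caption.
for (scene_id, object_id), views in df.groupby(["scene_id", "object_id"]):
    caption = views["caption"].iloc[0]  # one verified caption per object
    print(f"{scene_id}/{object_id}: {len(views)} views -> {caption!r}")
```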

---

## 🧪 Intended Use

This test set is for **evaluation only**.

You can use it to:

- Benchmark object-centric captioning models (e.g., BLIP-2, CoCa, Florence-2)
- Evaluate **caption consistency** across different viewpoints (see the sketch after this list)
- Compute caption-quality and semantic similarity metrics such as:
  - BLEU-4
  - ROUGE-L
  - METEOR
  - CIDEr
  - SPICE
  - Cosine similarity (SBERT embeddings)
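
One way to score cross-view consistency with SBERT embeddings, as a sketch: `my_captioner` is a hypothetical stand-in for whatever model produces one caption per image, and the checkpoint name is only an example.

```python
from itertools import combinations

import pandas as pd
from sentence_transformers import SentenceTransformer, util


def my_captioner(filename: str) -> str:
    # Hypothetical stand-in: replace with your model's caption for the image.
    return "a placeholder caption"


model = SentenceTransformer("all-MiniLM-L6-v2")  # example SBERT checkpoint
df = pd.read_csv("annotations.csv")
df["pred"] = df["filename"].map(my_captioner)

# Mean pairwise cosine similarity of predicted captions across the views
# of each object: higher means more viewpoint-consistent captions.
scores = []
for _, views in df.groupby(["scene_id", "object_id"]):
    preds = views["pred"].tolist()
    if len(preds) < 2:
        continue  # consistency needs at least two views
    emb = model.encode(preds, convert_to_tensor=True)
    sim = util.cos_sim(emb, emb)  # (n_views, n_views) cosine matrix
    pairs = list(combinations(range(len(preds)), 2))
    scores.append(sum(sim[i, j].item() for i, j in pairs) / len(pairs))

print(f"mean cross-view consistency: {sum(scores) / len(scores):.3f}")
```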

> ⚠️ This dataset **does not** include training data or pseudo-labels. It is **only the test set** with verified human annotations.

---

## 📈 Citation

If you use this dataset in your work, please cite:

```bibtex
@misc{galliena2025embodiedimagecaptioningselfsupervised,
  title={Embodied Image Captioning: Self-supervised Learning Agents for Spatially Coherent Image Descriptions},
  author={Tommaso Galliena and Tommaso Apicella and Stefano Rosa and Pietro Morerio and Alessio Del Bue and Lorenzo Natale},
  year={2025},
  eprint={2504.08531},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.08531},
}
```