---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- image-to-text
---
# Embodied Image Captioning – Manually Annotated Test Set
**Paper**: [Embodied Image Captioning: Self-supervised Learning Agents for Spatially Coherent Image Descriptions (ICCV 2025)](https://arxiv.org/abs/2504.08531)
**Authors**: Tommaso Galliena, Tommaso Apicella, Stefano Rosa, Pietro Morerio, Alessio Del Bue, Lorenzo Natale
**Affiliations**: Italian Institute of Technology (IIT), University of Genoa
**Project Website**: [https://hsp-iit.github.io/embodied-captioning](https://hsp-iit.github.io/embodied-captioning)
**Code**: [https://github.com/hsp-iit/embodied-captioning](https://github.com/hsp-iit/embodied-captioning)
---
## πŸ“¦ Dataset Description
This repository contains the **test set with human-annotated object captions** used to evaluate captioning consistency in the paper [Embodied Image Captioning (ICCV 2025)](https://arxiv.org/abs/2504.08531).
The data is collected from simulated indoor environments (Gibson and HM3D), where an agent explores the scene and captures RGB views of objects from different perspectives. Each object is annotated with a **single human-written caption**, verified to describe the object consistently across multiple views.
This test set enables rigorous evaluation of object-level image captioning models and their robustness to viewpoint changes.
---
## πŸ“ Contents
- `images/`:
  - `gibson/`:
    - `gibson_rg/`: Images collected using a random goal exploration policy (.npz)
    - `gibson_fr/`: Images collected using a frontier exploration policy (.npz)
    - `gibson_cla/`: Images collected using a CLA exploration policy (.npz)
- `annotations/`:
  - `gibson_annotations.csv`
  - `hm3d_annotations.csv`

Each annotation CSV contains the following columns:

- `filename`: Path to the image file
- `episode_id`: Scene identifier
- `object_id`: Unique object instance ID
- `bbox`: Bounding box coordinates (format: x, y, w, h)
- `caption`: Human-annotated caption for the object
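As a minimal sketch, the snippet below shows one way to read an annotation row and crop the corresponding object from its image. The `"rgb"` key inside the `.npz` files and the exact `bbox` string format are assumptions, so adjust them to the actual files:

```python
# Minimal loading sketch. The "rgb" array key and the comma-separated
# bbox string are assumptions about the file layout, not guarantees.
import numpy as np
import pandas as pd

ann = pd.read_csv("annotations/gibson_annotations.csv")
row = ann.iloc[0]

frame = np.load(row["filename"])["rgb"]          # assumed key name inside the .npz
x, y, w, h = (int(float(v)) for v in str(row["bbox"]).strip("[]() ").split(","))
crop = frame[y:y + h, x:x + w]                   # bbox format: x, y, w, h
print(row["object_id"], row["caption"], crop.shape)
```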
---
## ⬇️ How to Download
You can download the dataset using the [πŸ€— Datasets library](https://huggingface.co/docs/datasets/) with the following code:
```python
from datasets import load_dataset
dataset = load_dataset("TommyBsk/Embodied-Captioning")
```
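Once loaded, individual records can be inspected as usual; the split name `"test"` below is an assumption about the generated dataset layout, and the field names mirror the annotation columns:

```python
# Print the human-written caption of the first record
# (the "test" split name is assumed; check dataset.keys() if it differs).
sample = dataset["test"][0]
print(sample["caption"])
```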
## πŸ§ͺ Intended Use
This test set is for **evaluation only**.
You can use it to:
- Benchmark object-centric captioning models (e.g., BLIP-2, CoCa, Florence-2)
- Evaluate **caption consistency** across different viewpoints
- Compute standard captioning and semantic similarity metrics such as:
- BLEU-4
- ROUGE-L
- METEOR
- CIDEr
- SPICE
  - Cosine similarity (SBERT embeddings), as sketched below
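For example, viewpoint consistency can be scored by embedding two captions of the same object with SBERT and comparing them with cosine similarity. This is only a sketch: the model name and the caption pairing are illustrative, not the paper's exact setup:

```python
# Sketch of viewpoint-consistency scoring with SBERT embeddings.
# The model name and example captions are illustrative placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

caption_view_a = "a red armchair next to a wooden side table"
caption_view_b = "a red fabric armchair seen from behind"

emb = model.encode([caption_view_a, caption_view_b], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()
print(f"SBERT cosine similarity: {score:.3f}")
```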
> ⚠️ This dataset **does not** include training data or pseudo-labels. It is **only the test set** with verified human annotations.
---
## πŸ“ˆ Citation
If you use this dataset in your work, please cite:
```bibtex
@inproceedings{galliena2025embodied,
title={Embodied Image Captioning: Self-supervised Learning Agents for Spatially Coherent Image Descriptions},
author={Galliena, Tommaso and Apicella, Tommaso and Rosa, Stefano and Morerio, Pietro and Del Bue, Alessio and Natale, Lorenzo},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
year={2025}
}
```