---
license: mit
task_categories:
  - image-to-text
language:
  - en
size_categories:
  - 10K<n<100K
---

# Embodied Image Captioning – Manually Annotated Test Set

**Paper:** [Embodied Image Captioning: Self-supervised Learning Agents for Spatially Coherent Image Descriptions](https://arxiv.org/abs/2504.08531) (ICCV 2025)

**Authors:** Tommaso Galliena, Tommaso Apicella, Stefano Rosa, Pietro Morerio, Alessio Del Bue, Lorenzo Natale

**Affiliations:** Italian Institute of Technology (IIT), University of Genoa

**Project Website:** https://hsp-iit.github.io/embodied-captioning


## 📦 Dataset Description

This repository contains the test set with human-annotated object captions used to evaluate captioning consistency in the ICCV 2025 paper.

The data is collected from simulated indoor environments (Gibson and HM3D), where an agent explores the scene and captures RGB views of objects from different perspectives. Each object is annotated with a single human-written caption, verified to describe the object consistently across multiple views.

This test set enables rigorous evaluation of object-level image captioning models and their robustness to viewpoint changes.


## 📁 Contents

- `images/`
  - `gibson/`
    - `gibson_rg/`: images collected using the random goal exploration policy (`.npy` files)
    - `gibson_fr/`: images collected using the frontier exploration policy (`.npy` files)
    - `gibson_cla/`: images collected using the CLA exploration policy (`.npy` files)
- `annotations.csv`: metadata and ground-truth captions (see the loading sketch after this list), with the following columns:
  - `filename`: image file name
  - `scene_id`: scene/environment identifier
  - `object_id`: unique object instance ID
  - `view_id`: viewpoint index
  - `bbox`: bounding box coordinates (format: `x, y, w, h`)
  - `policy`: exploration policy used (`random`, `frontier`, or `CLA`)
  - `caption`: human-annotated caption for the object
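
The following is a minimal loading sketch, not official tooling. It assumes the dataset has been downloaded to a local folder (the `Embodied-Captioning` path below is hypothetical), that each value in the `filename` column maps to a single `.npy` array under the corresponding policy directory, and that `bbox` is stored as a comma-separated `x, y, w, h` string; adjust the paths and parsing if the actual layout differs.

```python
# Hedged loading sketch: paths, the policy-to-directory mapping, and the
# bbox parsing below are assumptions, not guaranteed by the dataset card.
from pathlib import Path

import numpy as np
import pandas as pd

DATA_ROOT = Path("Embodied-Captioning")  # hypothetical local checkout
annotations = pd.read_csv(DATA_ROOT / "annotations.csv")

row = annotations.iloc[0]
x, y, w, h = (int(float(v)) for v in str(row["bbox"]).split(","))

# Map the policy label to its image directory (assumed naming).
policy_dirs = {"random": "gibson_rg", "frontier": "gibson_fr", "CLA": "gibson_cla"}
image_path = DATA_ROOT / "images" / "gibson" / policy_dirs[row["policy"]] / row["filename"]

image = np.load(image_path)                # assumed to be an H x W x 3 RGB array
object_crop = image[y : y + h, x : x + w]  # crop the annotated object

print(row["caption"], object_crop.shape)
```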

## 🧪 Intended Use

This test set is for evaluation only.

You can use it to:

- Benchmark object-centric captioning models (e.g., BLIP2, CoCa, Florence2)
- Evaluate caption consistency across different viewpoints
- Compute caption quality and semantic similarity metrics (see the similarity sketch after this list), such as:
  - BLEU-4
  - ROUGE-L
  - METEOR
  - CIDEr
  - SPICE
  - Cosine similarity (SBERT embeddings)
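
Below is a hedged sketch of one way to score generated captions against the human annotations with SBERT cosine similarity; it is not the paper's official evaluation code. The `sentence-transformers` model name (`all-MiniLM-L6-v2`) and the `predictions.csv` file (a hypothetical file with one generated `caption` per `filename`) are illustrative assumptions.

```python
# Hedged evaluation sketch: predictions.csv and the chosen SBERT model are
# assumptions; only annotations.csv and its column names come from this card.
import pandas as pd
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

annotations = pd.read_csv("annotations.csv")
predictions = pd.read_csv("predictions.csv")  # hypothetical: columns `filename`, `caption`

merged = annotations.merge(predictions, on="filename", suffixes=("_gt", "_pred"))

gt_emb = model.encode(merged["caption_gt"].tolist(), convert_to_tensor=True)
pred_emb = model.encode(merged["caption_pred"].tolist(), convert_to_tensor=True)

# Cosine similarity between each generated caption and its human caption.
merged["sbert_cosine"] = util.cos_sim(pred_emb, gt_emb).diagonal().cpu().numpy()

# Averaging per object gives a rough view-consistency score per object.
print(merged.groupby("object_id")["sbert_cosine"].mean().describe())
```

Averaging the per-view scores by `object_id` is only one possible consistency summary; the paper's exact evaluation protocol may differ.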

⚠️ This dataset does not include training data or pseudo-labels. It is only the test set with verified human annotations.


## 📈 Citation

If you use this dataset in your work, please cite:

```bibtex
@misc{galliena2025embodiedimagecaptioningselfsupervised,
      title={Embodied Image Captioning: Self-supervised Learning Agents for Spatially Coherent Image Descriptions},
      author={Tommaso Galliena and Tommaso Apicella and Stefano Rosa and Pietro Morerio and Alessio Del Bue and Lorenzo Natale},
      year={2025},
      eprint={2504.08531},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.08531},
}
```