TommyBsk committed on
Commit 99849eb · verified · 1 Parent(s): 4df416b

Update README.md

Files changed (1):
  1. README.md +22 -11
README.md CHANGED
@@ -19,7 +19,7 @@ size_categories:
 
 ## 📦 Dataset Description
 
-This repository contains the **test set with human-annotated object captions** used to evaluate captioning consistency in the ICCV 2025 paper.
+This repository contains the **test set with human-annotated object captions** used to evaluate captioning consistency in the paper [Embodied Image Captioning (ICCV 2025)](https://arxiv.org/abs/2504.08531).
 
 The data is collected from simulated indoor environments (Gibson and HM3D), where an agent explores the scene and captures RGB views of objects from different perspectives. Each object is annotated with a **single human-written caption**, verified to describe the object consistently across multiple views.
 
@@ -31,21 +31,32 @@ This test set enables rigorous evaluation of object-level image captioning model
 
 - `images/`:
   - `gibson/`:
-    - `gibson_rg/`: File containing the images collected using random goal exploration policy (.npy)
-    - `gibson_fr/`: File containing the images collected using frontier exploration policy (.npy)
-    - `gibson_cla/`: File containing the images collected using CLA exploration policy (.npy)
+    - `gibson_rg/`: Images collected using a random goal exploration policy (.npz)
+    - `gibson_fr/`: Images collected using a frontier exploration policy (.npz)
+    - `gibson_cla/`: Images collected using a CLA exploration policy (.npz)
+
-- `annotations`:
+- `annotations/`:
   - `gibson_annotations.csv`
   - `hm3d_annotations.csv`
-  - `filename`: Path to the image file
-  - `episode_id`: Scene identifier
-  - `object_id`: Unique object instance ID
-  - `bbox`: Bounding box coordinates (format: x, y, w, h)
-  - `caption`: Human-annotated caption for the object
+  - `filename`: Path to the image file
+  - `episode_id`: Scene identifier
+  - `object_id`: Unique object instance ID
+  - `bbox`: Bounding box coordinates (format: x, y, w, h)
+  - `caption`: Human-annotated caption for the object
 
 ---
 
+## ⬇️ How to Download
+
+You can download the dataset using the [🤗 Datasets library](https://huggingface.co/docs/datasets/) with the following code:
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset("TommyBsk/Embodied-Captioning")
+```
+
 ## 🧪 Intended Use
 
 This test set is for **evaluation only**.
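
As a usage note on the annotation schema added in this commit: the sketch below reads a CSV with the columns listed above (`filename`, `episode_id`, `object_id`, `bbox`, `caption`) and parses the stated `x, y, w, h` bounding-box format. The sample row is invented for illustration; only the column names and the bbox format come from the README.

```python
import csv
import io

# Hypothetical in-memory stand-in for annotations/gibson_annotations.csv;
# the row values are invented, only the columns match the README.
csv_text = """filename,episode_id,object_id,bbox,caption
images/gibson/gibson_rg/ep0_obj3.png,ep0,3,"10, 20, 64, 48",a wooden chair
"""

def parse_bbox(s):
    # "x, y, w, h" -> (x, y, w, h) as ints, per the format noted in the README
    x, y, w, h = (int(v.strip()) for v in s.split(","))
    return x, y, w, h

rows = list(csv.DictReader(io.StringIO(csv_text)))
for row in rows:
    row["bbox"] = parse_bbox(row["bbox"])

print(rows[0]["bbox"])  # (10, 20, 64, 48)
```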
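
The image files under `images/gibson/` are stored as `.npz` archives. A minimal round-trip sketch, assuming a NumPy archive: the key name `images` and the array shape are assumptions for illustration, since the README only states that each `gibson_*` entry holds the images collected by one exploration policy.

```python
import os
import tempfile

import numpy as np

# Hypothetical round-trip matching the layout described above.
# Key name "images" and shape (views, H, W, 3) are assumptions.
views = np.zeros((4, 128, 128, 3), dtype=np.uint8)  # 4 RGB views of one object
path = os.path.join(tempfile.gettempdir(), "gibson_rg_example.npz")
np.savez_compressed(path, images=views)

with np.load(path) as data:
    loaded = data["images"]

print(loaded.shape)  # (4, 128, 128, 3)
```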