RichardC0216 committed commit 372a1b9 (verified · parent: 59f0912)

Update README.md

Files changed (1):
  1. README.md (+148 −2)
README.md CHANGED
@@ -11,5 +11,151 @@ configs:
   path: agent_profiles.parquet
 ---

- blank README
- will update when published
# CapNav: Benchmarking Vision Language Models on Capability-conditioned Indoor Navigation

CapNav is a benchmark for evaluating Vision Language Models on **capability-conditioned navigation reasoning** in indoor environments.
The benchmark focuses on determining whether an embodied agent with specific physical constraints and abilities
can navigate from a start area to a target area within a complex indoor scene.

This repository contains **two complementary datasets**:
1. The main **CapNav Benchmark Dataset**, which provides navigation questions paired with scene graph nodes and agent identifiers.
2. The **Agent Profiles Dataset**, which defines the physical dimensions and capabilities of each agent type.

---

## Quickstart

```python
from datasets import load_dataset

# Navigation questions: one row per question × agent combination
capnav = load_dataset("RichardC0216/CapNav", "capnav_v0", split="train")
# Physical dimensions and capabilities of each agent type
agents = load_dataset("RichardC0216/CapNav", "agent_profiles", split="train")

print(len(capnav), capnav.column_names)
print(len(agents), agents.column_names)
```

## Dataset Overview

### 1. CapNav Benchmark Dataset (`capnav_v0`)
- **File:** `capnav_v0_with_answer.parquet`
- **Format:** Parquet
- **Split:** `train`
- **Rows:** 2,330 (question × agent combinations)

This dataset contains natural language navigation questions grounded in indoor scenes.
Each question is evaluated under a specific agent profile, enabling systematic analysis
of how agent capabilities affect navigation feasibility.

#### Data Fields
Each field in the dataset is described below.
- **`question_id` (str)**
  Unique identifier for each question–agent pair.
- **`question` (str)**
  Natural language navigation question describing whether an agent can move from a start area to a goal area.
- **`scene_id` (str)**
  Identifier of the indoor scene (e.g., `HM3D00000`, `MP3D00027`).
- **`scene_type` (str)**
  High-level category of the scene (e.g., `home`).
- **`scene_nodes` (list[dict])**
  List of scene graph nodes available in the environment.
  Each node contains:
  - `node_id` (str): Unique identifier of the node
  - `name` (str): Human-readable node name (e.g., room or area label)
- **`agent_name` (str)**
  Name of the agent under which the navigation question should be evaluated.
  This field links to an entry in the Agent Profiles Dataset.
- **`answer` (bool)**
  Binary ground-truth navigability label (`True` / `False`) for the (`question`, `scene`, `agent`) triple.

> Note: `answer` only provides a **binary feasibility** signal. For scene graphs, route-level traversability,
> and detailed ground-truth rationale, please refer to the full annotations in `ground_truth/` (see below).

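The `agent_name` field is the join key between the two datasets. Below is a minimal sketch of resolving a question row to its agent profile; the rows here are hypothetical in-memory dicts that mimic the documented schema (real rows come from `load_dataset` as in the Quickstart), and all specific values are illustrative.

```python
# Hypothetical rows mimicking the documented schema; in practice these come
# from load_dataset("RichardC0216/CapNav", ...) as shown in the Quickstart.
capnav_rows = [
    {
        "question_id": "q0001_WHEELCHAIR",   # illustrative identifier
        "question": "Can the agent move from the bedroom to the kitchen?",
        "scene_id": "HM3D00000",
        "scene_type": "home",
        "scene_nodes": [
            {"node_id": "n0", "name": "bedroom"},
            {"node_id": "n1", "name": "kitchen"},
        ],
        "agent_name": "WHEELCHAIR",
        "answer": False,
    }
]
agent_rows = [
    {
        "agent_name": "WHEELCHAIR",
        "body_shape": "box",
        "body_height_m": 1.3,
        "body_width_m": 0.7,
        "body_depth_m": 1.1,
        "max_vertical_cross_height_m": 0.02,
        "can_go_up_or_down_stairs": False,
        "can_operate_elevator": True,
        "can_open_the_door": False,
        "description": "A powered wheelchair (illustrative profile).",
    }
]

# Index profiles by the join key for O(1) lookup per question.
profiles = {row["agent_name"]: row for row in agent_rows}

for q in capnav_rows:
    profile = profiles[q["agent_name"]]
    print(q["question_id"], "->", profile["body_shape"],
          "stairs:", profile["can_go_up_or_down_stairs"])
```

The same dictionary-index pattern works unchanged on the real datasets, since every `agent_name` in `capnav_v0` links to exactly one profile row.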
---

### 2. Agent Profiles Dataset
- **File:** `agent_profiles.parquet`
- **Format:** Parquet
- **Rows:** One per agent type

This dataset defines the physical properties and functional capabilities of each agent
used in the CapNav benchmark.

#### Data Fields
- **`agent_name` (str)**
  Unique identifier of the agent (e.g., `HUMAN`, `WHEELCHAIR`, `SWEEPER`).
- **`body_shape` (str)**
  Abstract geometric representation of the agent’s body (e.g., `cylinder`, `box`).
- **`body_height_m` (float)**
  Agent height in meters.
- **`body_width_m` (float)**
  Agent width in meters.
- **`body_depth_m` (float, optional)**
  Agent depth in meters. May be `null` for rotationally symmetric agents.
- **`max_vertical_cross_height_m` (float)**
  Maximum vertical obstacle height (in meters) the agent can cross.
- **`can_go_up_or_down_stairs` (bool)**
  Whether the agent can traverse stairs.
- **`can_operate_elevator` (bool)**
  Whether the agent can operate and use elevators.
- **`can_open_the_door` (bool)**
  Whether the agent can open doors independently.
- **`description` (str)**
  Natural language description summarizing the agent’s physical and functional characteristics.
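These fields support simple rule-based feasibility checks. The helper below is an illustrative sketch (not part of the dataset or its tooling), using a hypothetical profile row with the field names defined above:

```python
def can_pass_obstacle(profile: dict, obstacle_height_m: float,
                      is_stairs: bool = False) -> bool:
    """Illustrative feasibility check built from the profile fields above."""
    if is_stairs:
        # Stairs are gated by a dedicated capability flag.
        return profile["can_go_up_or_down_stairs"]
    # Otherwise compare against the maximum crossable obstacle height.
    return obstacle_height_m <= profile["max_vertical_cross_height_m"]

# Hypothetical profile row mimicking the schema (values are illustrative).
sweeper = {
    "agent_name": "SWEEPER",
    "max_vertical_cross_height_m": 0.02,
    "can_go_up_or_down_stairs": False,
}

print(can_pass_obstacle(sweeper, 0.01))        # True: below the threshold
print(can_pass_obstacle(sweeper, 0.15))        # False: step is too tall
print(can_pass_obstacle(sweeper, 0.0, True))   # False: cannot use stairs
```

Real feasibility in CapNav also depends on body dimensions and scene structure; see the ground-truth annotations below for the full signal.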

---

## Full Ground Truth Annotations (Graphs and Traversability)
This repository also includes a `ground_truth/` directory that provides the
**complete ground-truth annotations** used to derive the binary `answer` labels
in the main CapNav benchmark.

While the benchmark dataset exposes only a binary navigability outcome for each
`(question, scene, agent)` triple, the annotations in `ground_truth/` contain
the underlying **structural and traversability information** that supports
more detailed inspection and analysis.

Specifically, this directory includes:
- **Scene graph annotations** (`ground_truth/graphs/`)
  Manually constructed graph representations for each indoor environment,
  encoding nodes, connectivity, and true spatial structure.
- **Agent-conditioned traversability annotations** (`ground_truth/traverse/`)
  Route-level annotations that specify whether and why a path is traversable
  for a given agent, including agent-specific constraints and failure rationales.

The binary `answer` field in the benchmark dataset is a **distilled signal**
derived from these annotations. Researchers interested in path validity,
failure cases, or the reasons behind infeasible navigation decisions should
refer to the files in `ground_truth/`.

Detailed descriptions of file formats and annotation semantics can be found in
`ground_truth/README.md`.

The annotation pipeline, tooling, and quality control procedures are documented
in the CapNav GitHub repository (see the annotation code and documentation):
- https://github.com/Ruiqi-Chen-0216/CapNav
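Given a scene graph and agent-conditioned traversability labels, a binary feasibility answer reduces to a reachability query. Here is a minimal sketch using a hypothetical in-memory adjacency structure (the actual file formats are documented in `ground_truth/README.md`):

```python
from collections import deque

# Hypothetical scene graph: node_id -> neighbours traversable by ONE agent.
# Real graphs and per-agent traversability come from ground_truth/graphs/
# and ground_truth/traverse/ respectively.
adjacency = {
    "n0": ["n1"],
    "n1": ["n0", "n2"],
    "n2": ["n1"],
    "n3": [],  # e.g., reachable only via stairs this agent cannot use
}

def is_reachable(adjacency: dict, start: str, goal: str) -> bool:
    """Breadth-first search over the agent-traversable edges."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in adjacency[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(is_reachable(adjacency, "n0", "n2"))  # True
print(is_reachable(adjacency, "n0", "n3"))  # False
```

Because traversability is agent-conditioned, the adjacency structure differs per agent: the same scene graph yields different reachability (and hence different `answer` labels) for, say, `HUMAN` versus `WHEELCHAIR`.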

---
## Intended Use
The CapNav benchmark is intended for:
- Evaluating multimodal or vision-language models on embodied navigation reasoning
- Studying how physical capabilities affect navigation feasibility
- Benchmarking agent-aware reasoning and planning systems

The dataset is **not** intended to provide low-level control signals or precise geometric navigation paths.
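Since `answer` is binary, evaluation reduces to accuracy over question–agent pairs. A minimal scoring sketch, where the prediction and reference dicts are placeholders for your model's outputs and the `capnav_v0` labels (the question IDs and values below are illustrative):

```python
# Hypothetical model predictions, keyed by question_id.
predictions = {"q1": True, "q2": False, "q3": True}
# Ground-truth answers for the same questions (illustrative values).
references = {"q1": True, "q2": True, "q3": True}

# Accuracy = fraction of question-agent pairs with matching labels.
correct = sum(predictions[qid] == references[qid] for qid in references)
accuracy = correct / len(references)
print(f"accuracy = {accuracy:.3f}")  # prints accuracy = 0.667
```

Because each row already binds a question to one agent, per-agent accuracy can be computed by grouping rows on `agent_name` before scoring.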

---
## License
- **Dataset:** Creative Commons Attribution 4.0 International (CC BY 4.0)

---

## Citation

If you find CapNav useful for your research or applications, please cite our paper using this BibTeX:
```bibtex
@article{su2025capnav,
  title={CapNav: Benchmarking Vision Language Models on Capability-conditioned Indoor Navigation},
  author={Su, Xia and Chen, Ruiqi and Liu, Benlin and Ma, Jingwei and Di, Zonglin and Krishna, Ranjay and Froehlich, Jon},
  journal={arXiv preprint arXiv:xxxxxx},
  year={2025}
}
```