RichardC0216 committed · Commit 96592ce · verified · 1 Parent(s): b5ccb2b

Update README.md

Files changed (1):
  1. README.md +68 -29

README.md CHANGED
@@ -1,10 +1,19 @@
---
license: cc-by-4.0
---

# CapNav: Capability-Conditioned Indoor Navigation Benchmark

- CapNav is a benchmark for evaluating **capability-conditioned navigation reasoning** in indoor environments.
The benchmark focuses on determining whether an embodied agent with specific physical constraints and abilities
can navigate from a start area to a target area within a complex indoor scene.
@@ -14,23 +23,31 @@ This repository contains **two complementary datasets**:

---

- ## Dataset Overview

- ### 1. CapNav Benchmark Dataset

- - **File:** `capnav_v0_no_answer.parquet`
- **Format:** Parquet
- **Split:** `train`
- - **Rows:** 2,365 (question × agent combinations)
-
This dataset contains natural language navigation questions grounded in indoor scenes.
Each question is evaluated under a specific agent profile, enabling systematic analysis
of how agent capabilities affect navigation feasibility.

#### Data Fields
-
Each field in the dataset is described below.
-
- **`question_id` (str)**
Unique identifier for each question–agent pair.
- **`question` (str)**
@@ -47,18 +64,17 @@ Each field in the dataset is described below.
- **`agent_name` (str)**
Name of the agent under which the navigation question should be evaluated.
This field links to an entry in the Agent Profiles Dataset.
- - **`answer` (optional / null)**
- Ground-truth navigation feasibility label.
- This field is intentionally left empty in the current release and may be populated in future versions.

---

### 2. Agent Profiles Dataset
-
- **File:** `agent_profiles.parquet`
- **Format:** Parquet
- **Rows:** One per agent type
-
This dataset defines the physical properties and functional capabilities of each agent
used in the CapNav benchmark.
@@ -86,37 +102,60 @@ used in the CapNav benchmark.
Natural language description summarizing the agent’s physical and functional characteristics.

---

- ## Relationship Between the Datasets
-
- Each row in the CapNav Benchmark Dataset references exactly one agent via `agent_name`.
- The corresponding agent properties should be retrieved from the Agent Profiles Dataset
- before constructing prompts or evaluating navigation feasibility.
-
- This separation enables:
- - Modular benchmark design
- - Easy extension with new agent types
- - Stable navigation questions across different embodied agents

---

## Intended Use
-
The CapNav benchmark is intended for:
- Evaluating multimodal or vision-language models on embodied navigation reasoning
- Studying how physical capabilities affect navigation feasibility
- Benchmarking agent-aware reasoning and planning systems
-
The dataset is **not** intended to provide low-level control signals or precise geometric navigation paths.

---

## License
-
- **Dataset:** Creative Commons Attribution 4.0 International (CC BY 4.0)
-
---

## Citation

- If you use this dataset in your research, please cite:
-
---
license: cc-by-4.0
+ configs:
+ - config_name: capnav_v0
+   data_files:
+   - split: train
+     path: capnav_v0_with_answer.parquet
+ - config_name: agent_profiles
+   data_files:
+   - split: train
+     path: agent_profiles.parquet
---

# CapNav: Capability-Conditioned Indoor Navigation Benchmark

+ CapNav is a benchmark for evaluating **capability-conditioned navigation reasoning** in indoor environments.
The benchmark focuses on determining whether an embodied agent with specific physical constraints and abilities
can navigate from a start area to a target area within a complex indoor scene.
 
 
---

+ ## Quickstart
+
+ ```python
+ from datasets import load_dataset
+
+ capnav = load_dataset("RichardC0216/CapNav", "capnav_v0", split="train")
+ agents = load_dataset("RichardC0216/CapNav", "agent_profiles", split="train")
+
+ print(len(capnav), capnav.column_names)
+ print(len(agents), agents.column_names)
+ ```
+
+ ## Dataset Overview

+ ### 1. CapNav Benchmark Dataset (`capnav_v0`)
+ - **File:** `capnav_v0_with_answer.parquet`
- **Format:** Parquet
- **Split:** `train`
+ - **Rows:** 2,330 (question × agent combinations)
 
This dataset contains natural language navigation questions grounded in indoor scenes.
Each question is evaluated under a specific agent profile, enabling systematic analysis
of how agent capabilities affect navigation feasibility.

#### Data Fields
Each field in the dataset is described below.
- **`question_id` (str)**
Unique identifier for each question–agent pair.
- **`question` (str)**
 
- **`agent_name` (str)**
Name of the agent under which the navigation question should be evaluated.
This field links to an entry in the Agent Profiles Dataset.
+ - **`answer` (bool)**
+ Binary ground-truth navigability label (`True` / `False`) for the (`question`, `scene`, `agent`) triple.
+ > Note: `answer` provides only a **binary feasibility** signal. For scene graphs, route-level traversability,
+ > and detailed ground-truth rationales, refer to the full annotations in `ground_truth/` (see below).
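To illustrate how the two tables relate, here is a minimal sketch that joins benchmark rows with their agent profiles on `agent_name` and computes a per-agent feasibility rate from `answer`. The rows and the `description` column are made-up stand-ins; only `question_id`, `question`, `agent_name`, and `answer` are documented here.

```python
import pandas as pd

# Hypothetical stand-in rows mirroring the documented schema; the real data
# lives in capnav_v0_with_answer.parquet and agent_profiles.parquet.
capnav = pd.DataFrame({
    "question_id": ["q1_robot", "q2_robot", "q1_drone"],
    "question": ["Can you reach the kitchen?"] * 3,
    "agent_name": ["robot", "robot", "drone"],
    "answer": [True, False, True],
})
agents = pd.DataFrame({
    "agent_name": ["robot", "drone"],
    "description": ["wheeled base, cannot climb stairs", "small quadcopter"],
})

# Join each question row with its agent profile, then compute the
# feasibility rate (fraction of answer == True) per agent.
merged = capnav.merge(agents, on="agent_name", how="left")
rates = merged.groupby("agent_name")["answer"].mean()
print(rates.to_dict())  # {'drone': 1.0, 'robot': 0.5}
```

To run this on the real files, replace the stand-in frames with `pd.read_parquet("capnav_v0_with_answer.parquet")` and `pd.read_parquet("agent_profiles.parquet")`.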

---

### 2. Agent Profiles Dataset
- **File:** `agent_profiles.parquet`
- **Format:** Parquet
- **Rows:** One per agent type

This dataset defines the physical properties and functional capabilities of each agent
used in the CapNav benchmark.

Natural language description summarizing the agent’s physical and functional characteristics.

---
 
+ ## Full Ground-Truth Annotations (Graphs and Traversability)
+
+ This repository also includes a `ground_truth/` directory that provides the
+ **complete ground-truth annotations** used to derive the binary `answer` labels
+ in the main CapNav benchmark.
+
+ While the benchmark dataset exposes only a binary navigability outcome for each
+ `(question, scene, agent)` triple, the annotations in `ground_truth/` contain
+ the underlying **structural and traversability information** that supports
+ more detailed inspection and analysis.
+
+ Specifically, this directory includes:
+ - **Scene graph annotations** (`ground_truth/graphs/`)
+ Manually constructed graph representations for each indoor environment,
+ encoding nodes, connectivity, and true spatial structure.
+ - **Agent-conditioned traversability annotations** (`ground_truth/traverse/`)
+ Route-level annotations that specify whether and why a path is traversable
+ for a given agent, including agent-specific constraints and failure rationales.
+
+ The binary `answer` field in the benchmark dataset is a **distilled signal**
+ derived from these annotations. Researchers interested in path validity,
+ failure cases, or the reasons behind infeasible navigation decisions should
+ refer to the files in `ground_truth/`.
+
+ Detailed descriptions of file formats and annotation semantics can be found in
+ `ground_truth/README.md`.
+
+ The annotation pipeline, tooling, and quality-control procedures are documented
+ in the CapNav GitHub repository (see the annotation code and documentation):
+ - https://github.com/Ruiqi-Chen-0216/CapNav
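To make agent-conditioned traversability concrete, here is a toy reachability check in the spirit of these annotations: an edge is usable only if the agent's capabilities permit it, and the binary outcome falls out of graph reachability. The graph, its edge tags, and the `can_climb_stairs` flag are all invented for illustration; the actual annotation formats are documented in `ground_truth/README.md`.

```python
from collections import deque

# Invented toy scene graph: nodes are areas, edges carry a constraint tag.
# "stairs" edges are traversable only by agents that can climb stairs.
edges = {
    ("lobby", "hall"): "flat",
    ("hall", "kitchen"): "stairs",
    ("hall", "office"): "flat",
}

def reachable(start, goal, can_climb_stairs):
    """BFS over the toy graph, skipping edges the agent cannot traverse."""
    adj = {}
    for (a, b), kind in edges.items():
        if kind == "stairs" and not can_climb_stairs:
            continue  # this edge is infeasible for the agent
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A stair-capable agent reaches the kitchen; a wheeled agent does not.
print(reachable("lobby", "kitchen", can_climb_stairs=True))   # True
print(reachable("lobby", "kitchen", can_climb_stairs=False))  # False
```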

---

## Intended Use
The CapNav benchmark is intended for:
- Evaluating multimodal or vision-language models on embodied navigation reasoning
- Studying how physical capabilities affect navigation feasibility
- Benchmarking agent-aware reasoning and planning systems

The dataset is **not** intended to provide low-level control signals or precise geometric navigation paths.

---

## License
- **Dataset:** Creative Commons Attribution 4.0 International (CC BY 4.0)

---

## Citation

+ If you find this dataset useful in your research, please cite our paper:
+ ```bibtex
+ @article{su2025capnav,
+   title={CapNav: Benchmarking Vision Language Models on Capability-conditioned Indoor Navigation},
+   author={Su, Xia and Chen, Ruiqi and Liu, Benlin and Ma, Jingwei and Di, Zonglin and Krishna, Ranjay and Froehlich, Jon},
+   journal={arXiv preprint arXiv:xxxxxx},
+   year={2025}
+ }
+ ```