---
license: cc-by-4.0
configs:
- config_name: capnav_v0
  data_files:
  - split: train
    path: capnav_v0_with_answer.parquet
- config_name: agent_profiles
  data_files:
  - split: train
    path: agent_profiles.parquet
---

# CapNav: Benchmarking Vision Language Models on Capability-conditioned Indoor Navigation

CapNav is a benchmark that evaluates Vision Language Models on **capability-conditioned navigation reasoning** in indoor environments.
The benchmark focuses on determining whether an embodied agent with specific physical constraints and abilities
can navigate from a start area to a target area within a complex indoor scene.

This repository contains **two complementary datasets**:
1. The main **CapNav Benchmark Dataset**, which provides navigation questions paired with scene graph nodes and agent identifiers.
2. The **Agent Profiles Dataset**, which defines the physical dimensions and capabilities of each agent type.

---

## Quickstart

```python
from datasets import load_dataset

capnav = load_dataset("RichardC0216/CapNav", "capnav_v0", split="train")
agents = load_dataset("RichardC0216/CapNav", "agent_profiles", split="train")

print(len(capnav), capnav.column_names)
print(len(agents), agents.column_names)
```

## Dataset Overview

### 1. CapNav Benchmark Dataset (`capnav_v0`)
- **File:** `capnav_v0_with_answer.parquet`
- **Format:** Parquet
- **Split:** `train`
- **Rows:** 2,330 (question × agent combinations)

This dataset contains natural language navigation questions grounded in indoor scenes.
Each question is evaluated under a specific agent profile, enabling systematic analysis
of how agent capabilities affect navigation feasibility.

#### Data Fields
Each field in the dataset is described below.
- **`question_id` (str)**
  Unique identifier for each question–agent pair.
- **`question` (str)**
  Natural language navigation question asking whether an agent can move from a start area to a goal area.
- **`scene_id` (str)**
  Identifier of the indoor scene (e.g., `HM3D00000`, `MP3D00027`).
- **`scene_type` (str)**
  High-level category of the scene (e.g., `home`).
- **`scene_nodes` (list[dict])**
  List of scene graph nodes available in the environment. Each node contains:
  - `node_id` (str): Unique identifier of the node
  - `name` (str): Human-readable node name (e.g., room or area label)
- **`agent_name` (str)**
  Name of the agent under which the navigation question should be evaluated.
  This field links to an entry in the Agent Profiles Dataset.
- **`answer` (bool)**
  Binary ground-truth navigability label (`True` / `False`) for the (`question`, `scene`, `agent`) triple.

> Note: `answer` only provides a **binary feasibility** signal. For the underlying scene graphs, route-level traversability,
> and detailed ground-truth rationales, please refer to the full annotations in `ground_truth/` (see below).
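
For a quick look at a single example, a row can be unpacked directly (a minimal sketch using only the fields documented above; the first row is an arbitrary choice):

```python
from datasets import load_dataset

capnav = load_dataset("RichardC0216/CapNav", "capnav_v0", split="train")

# Inspect one question–agent pair.
row = capnav[0]
print(row["question_id"], row["scene_id"], row["scene_type"], row["agent_name"])
print(row["question"])

# scene_nodes lists the scene graph nodes, each a {node_id, name} dict.
for node in row["scene_nodes"][:5]:
    print(node["node_id"], "->", node["name"])

# Binary navigability ground truth for this (question, scene, agent) triple.
print("navigable:", row["answer"])
```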

---

### 2. Agent Profiles Dataset
- **File:** `agent_profiles.parquet`
- **Format:** Parquet
- **Split:** `train`
- **Rows:** One per agent type

This dataset defines the physical properties and functional capabilities of each agent
used in the CapNav benchmark.

#### Data Fields
- **`agent_name` (str)**
  Unique identifier of the agent (e.g., `HUMAN`, `WHEELCHAIR`, `SWEEPER`).
- **`body_shape` (str)**
  Abstract geometric representation of the agent’s body (e.g., `cylinder`, `box`).
- **`body_height_m` (float)**
  Agent height in meters.
- **`body_width_m` (float)**
  Agent width in meters.
- **`body_depth_m` (float, optional)**
  Agent depth in meters. May be `null` for rotationally symmetric agents.
- **`max_vertical_cross_height_m` (float)**
  Maximum vertical obstacle height the agent can cross.
- **`can_go_up_or_down_stairs` (bool)**
  Indicates whether the agent is capable of traversing stairs.
- **`can_operate_elevator` (bool)**
  Indicates whether the agent can operate and use elevators.
- **`can_open_the_door` (bool)**
  Indicates whether the agent can open doors independently.
- **`description` (str)**
  Natural language description summarizing the agent’s physical and functional characteristics.
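
Because `agent_name` links the two configs, evaluating a question under the correct physical profile amounts to a dictionary join (a minimal sketch; how the profile is presented to a model is up to the evaluation setup):

```python
from datasets import load_dataset

capnav = load_dataset("RichardC0216/CapNav", "capnav_v0", split="train")
agents = load_dataset("RichardC0216/CapNav", "agent_profiles", split="train")

# Index agent profiles by their unique agent_name.
profiles = {a["agent_name"]: a for a in agents}

# Attach the matching profile to a benchmark row.
row = capnav[0]
profile = profiles[row["agent_name"]]

# The natural language description makes the agent's capabilities
# explicit, e.g. for inclusion in a model prompt.
print(profile["description"])
print(row["question"])
```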

---

## Full Ground Truth Annotations (Graphs and Traversability)
This repository also includes a `ground_truth/` directory that provides the
**complete ground-truth annotations** used to derive the binary `answer` labels
in the main CapNav benchmark.

While the benchmark dataset exposes only a binary navigability outcome for each
`(question, scene, agent)` triple, the annotations in `ground_truth/` contain
the underlying **structural and traversability information** that supports
more detailed inspection and analysis.

Specifically, this directory includes:
- **Scene graph annotations** (`ground_truth/graphs/`)
  Manually constructed graph representations for each indoor environment,
  encoding nodes, connectivity, and true spatial structure.
- **Agent-conditioned traversability annotations** (`ground_truth/traverse/`)
  Route-level annotations that specify whether and why a path is traversable
  for a given agent, including agent-specific constraints and failure rationales.

The binary `answer` field in the benchmark dataset is a **distilled signal**
derived from these annotations. Researchers interested in path validity,
failure cases, or the reasons behind infeasible navigation decisions should
refer to the files in `ground_truth/`.

Detailed descriptions of file formats and annotation semantics can be found in:
- `ground_truth/README.md`
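
Since `ground_truth/` ships as plain repository files rather than a `datasets` config, its contents can be listed and fetched with `huggingface_hub` (a minimal sketch; the individual file names under `graphs/` and `traverse/` are documented in `ground_truth/README.md`):

```python
from huggingface_hub import hf_hub_download, list_repo_files

# List the annotation files stored alongside the parquet configs.
files = [f for f in list_repo_files("RichardC0216/CapNav", repo_type="dataset")
         if f.startswith("ground_truth/")]
print(files[:10])

# Download the format documentation for the full annotations.
readme_path = hf_hub_download(
    "RichardC0216/CapNav",
    filename="ground_truth/README.md",
    repo_type="dataset",
)
print(open(readme_path).read()[:500])
```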

The annotation pipeline, tooling, and quality control procedures are documented
in the CapNav GitHub repository:
- https://github.com/makeabilitylab/CapNav (see the annotation code and documentation)

---

## Intended Use
The CapNav benchmark is intended for:
- Evaluating multimodal or vision-language models on embodied navigation reasoning
- Studying how physical capabilities affect navigation feasibility
- Benchmarking agent-aware reasoning and planning systems

The dataset is **not** intended to provide low-level control signals or precise geometric navigation paths.
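
Because `answer` is binary, the core evaluation reduces to accuracy over question–agent pairs. A minimal sketch, where the hypothetical `predict` stands in for any model call that returns a boolean:

```python
from datasets import load_dataset

capnav = load_dataset("RichardC0216/CapNav", "capnav_v0", split="train")

def predict(question: str, agent_name: str) -> bool:
    """Hypothetical stand-in for a VLM call; replace with your model."""
    return True  # trivial always-feasible baseline

correct = sum(
    predict(row["question"], row["agent_name"]) == row["answer"]
    for row in capnav
)
print(f"accuracy: {correct / len(capnav):.3f}")
```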

---

## License
- **Dataset:** Creative Commons Attribution 4.0 International (CC BY 4.0)

---

## Citation

If you find CapNav useful for your research and applications, please cite our paper using this BibTeX:

```bibtex
@article{su2026capnav,
  title={CapNav: Benchmarking Vision Language Models on Capability-conditioned Indoor Navigation},
  author={Su, Xia and Chen, Ruiqi and Liu, Benlin and Ma, Jingwei and Di, Zonglin and Krishna, Ranjay and Froehlich, Jon},
  journal={arXiv preprint arXiv:2602.18424},
  year={2026}
}
```