# Ground Truth Annotations
This directory contains the full ground-truth annotations used to construct the
binary navigability labels (`answer`) in the main CapNav benchmark.

While the primary benchmark file `capnav_v0_with_answer.parquet`
provides binary (True/False) navigability answers for each
`(question, scene, agent)` triple, the files in this directory expose the
underlying structural and traversability annotations from which those
binary labels are derived.

These annotations offer richer, fine-grained information about scene structure, feasible routes, and failure cases. They are intended for researchers who wish to inspect, verify, or extend the benchmark.
## Directory Structure

```
ground_truth/
├── graphs/
│   └── <scene_id>-graph.json
├── traverse/
│   └── <scene_id>-traverse.json
└── README.md
```
### graphs/
This folder contains scene-level spatial graph annotations for each 3D environment.
Each file corresponds to a single scene and encodes:
- A graph representation of the indoor environment
- Nodes representing semantically meaningful locations or regions
- Edges representing potential transitions between nodes
- Human-annotated connectivity reflecting the true physical layout of the scene
These graphs serve as the structural backbone for all navigability and traversability reasoning in CapNav.
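As a sketch of how these graph files might be consumed, the snippet below parses a scene-graph JSON into an undirected adjacency map. The field names (`nodes`, `edges`, `source`, `target`) and the sample node labels are assumptions for illustration only; consult the CapNav repository for the actual schema of `<scene_id>-graph.json`.

```python
import json

# Hypothetical example of a <scene_id>-graph.json payload. The real CapNav
# field names and node identifiers may differ from this sketch.
sample_graph_json = """
{
  "scene_id": "scene_0001",
  "nodes": [
    {"id": "kitchen", "label": "kitchen"},
    {"id": "hallway", "label": "hallway"},
    {"id": "bedroom", "label": "bedroom"}
  ],
  "edges": [
    {"source": "kitchen", "target": "hallway"},
    {"source": "hallway", "target": "bedroom"}
  ]
}
"""

def load_adjacency(graph_json: str) -> dict:
    """Build an undirected adjacency map from a scene-graph JSON string."""
    graph = json.loads(graph_json)
    adjacency = {node["id"]: set() for node in graph["nodes"]}
    for edge in graph["edges"]:
        # Connectivity is annotated between pairs of nodes; store it both ways.
        adjacency[edge["source"]].add(edge["target"])
        adjacency[edge["target"]].add(edge["source"])
    return adjacency

adjacency = load_adjacency(sample_graph_json)
print(sorted(adjacency["hallway"]))  # -> ['bedroom', 'kitchen']
```

In a real analysis you would read the JSON from `graphs/<scene_id>-graph.json` instead of an inline string.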
### traverse/
This folder contains route-level traversability annotations built on top of the scene graphs.
For each scene, the traverse annotations specify:
- Whether a route between two nodes is traversable for a given agent
- Agent-specific constraints (e.g., body size, ability to climb stairs)
- Valid and invalid edges along candidate paths
- Detailed reasons explaining why traversal is not possible when a route fails
These files provide richer supervision than the binary labels alone, including intermediate decisions and failure rationales.
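To make the structure of this richer supervision concrete, here is a hypothetical sketch of reading one traverse entry and printing its route verdicts and failure rationales. Every field name (`agent`, `routes`, `traversable`, `invalid_edges`, `reason`) and the agent name are assumptions, not the documented CapNav schema.

```python
import json

# Hypothetical sketch of a <scene_id>-traverse.json entry. The actual CapNav
# schema may use different field names; see the repository for details.
sample_traverse_json = """
{
  "agent": "wheelchair_user",
  "routes": [
    {
      "start": "kitchen",
      "goal": "bedroom",
      "traversable": false,
      "invalid_edges": [
        {"source": "hallway", "target": "bedroom",
         "reason": "staircase not climbable by this agent"}
      ]
    }
  ]
}
"""

record = json.loads(sample_traverse_json)
for route in record["routes"]:
    status = "traversable" if route["traversable"] else "blocked"
    print(f"{route['start']} -> {route['goal']}: {status}")
    for edge in route["invalid_edges"]:
        # Failure rationales explain why an edge is invalid for this agent.
        print(f"  blocked edge {edge['source']}->{edge['target']}: {edge['reason']}")
```

Iterating over records like this is one way to aggregate failure reasons per agent type, going beyond the binary labels in the parquet file.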
## Relationship to the Main Benchmark
The main benchmark file `capnav_v0_with_answer.parquet`
contains a simplified representation of the ground truth:

- Each row corresponds to a `(question, scene, agent)` triple
- The `answer` column stores a binary navigability label
- This binary label is derived from the graph and traverse annotations in this directory
In other words:

> The binary answers in the benchmark are a distilled view of the more comprehensive ground-truth annotations provided here.
Researchers interested only in benchmarking model accuracy can rely on
the parquet file directly, while those seeking deeper insight into
navigation feasibility, path validity, or agent-specific constraints
should consult the annotations in `ground_truth/`.
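For the benchmarking-only workflow, scoring a model reduces to comparing predictions against the `answer` column. The sketch below uses a tiny in-memory stand-in DataFrame (in practice you would call `pd.read_parquet("capnav_v0_with_answer.parquet")`); the question strings, scene IDs, and agent names are invented for illustration, and only the `(question, scene, agent)` columns and the `answer` column are described by this README.

```python
import pandas as pd

# Stand-in for pd.read_parquet("capnav_v0_with_answer.parquet"):
# two hypothetical rows with the columns described in this README.
df = pd.DataFrame({
    "question": ["Can the agent reach the bedroom from the kitchen?",
                 "Can the agent pass through the narrow hallway?"],
    "scene": ["scene_0001", "scene_0002"],
    "agent": ["wheelchair_user", "quadruped_robot"],
    "answer": [False, True],
})

predictions = [False, False]  # hypothetical model outputs, one per row

# Accuracy is the fraction of rows where the prediction matches `answer`.
accuracy = (df["answer"] == predictions).mean()
print(f"accuracy: {accuracy:.2f}")  # -> accuracy: 0.50
```

Because each row is a self-contained `(question, scene, agent)` triple, no join against the `ground_truth/` files is needed for accuracy alone.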
## Annotation Process
All graph and traversability annotations in this directory are:
- Manually constructed based on the true geometry and semantics of each scene
- Verified to reflect realistic physical constraints and navigational affordances
- Used as the authoritative source of ground truth in CapNav
Details of the annotation pipeline, including tooling, validation logic, and quality control procedures, are documented in the CapNav repository:
https://github.com/makeabilitylab/CapNav
Please refer specifically to the annotation code and documentation in the repository for implementation details.
## Intended Use
This directory is provided to support:
- Transparency and reproducibility of the benchmark
- Analysis of failure cases beyond binary correctness
- Future extensions of CapNav (e.g., multi-step supervision, path prediction)
- Research on agent-specific navigation constraints and embodied reasoning
For most benchmarking use cases, users only need the main parquet file.
The contents of `ground_truth/` are optional but recommended for advanced analysis.