# Ground Truth Annotations

This directory contains the **full ground-truth annotations** used to construct the
binary navigability labels (`answer`) in the main CapNav benchmark.

While the primary benchmark file `capnav_v0_with_answer.parquet` provides
**binary (True/False) navigability answers** for each
(question, scene, agent) triple, the files in this directory expose the
**underlying structural and traversability annotations** from which those
binary labels are derived.

These annotations offer **richer, fine-grained information** about scene
structure, feasible routes, and failure cases, and are intended for
researchers who wish to inspect, verify, or extend the benchmark.

---

## Directory Structure

```
ground_truth/
├── graphs/
│   └── <scene_id>-graph.json
├── traverse/
│   └── <scene_id>-traverse.json
└── README.md
```


### `graphs/`

This folder contains **scene-level spatial graph annotations** for each 3D environment.

Each file corresponds to a single scene and encodes:
- A graph representation of the indoor environment
- Nodes representing semantically meaningful locations or regions
- Edges representing potential transitions between nodes
- Human-annotated connectivity reflecting the true physical layout of the scene

These graphs serve as the **structural backbone** for all navigability and
traversability reasoning in CapNav.
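As a rough illustration, a graph annotation of this shape can be loaded and turned into an adjacency map for connectivity queries. The field names below (`nodes`, `edges`, `id`, `source`, `target`) are assumptions for the sketch, not the actual schema of the `<scene_id>-graph.json` files:

```python
import json

# Hypothetical graph record; the real key names in
# <scene_id>-graph.json may differ from this sketch.
scene_graph = {
    "nodes": [
        {"id": "kitchen", "type": "region"},
        {"id": "hallway", "type": "region"},
        {"id": "bedroom", "type": "region"},
    ],
    "edges": [
        {"source": "kitchen", "target": "hallway"},
        {"source": "hallway", "target": "bedroom"},
    ],
}

def build_adjacency(graph: dict) -> dict:
    """Build an undirected adjacency map from a graph annotation."""
    adj = {node["id"]: set() for node in graph["nodes"]}
    for edge in graph["edges"]:
        adj[edge["source"]].add(edge["target"])
        adj[edge["target"]].add(edge["source"])
    return adj

# With a real file: scene_graph = json.load(open("graphs/..."))
adj = build_adjacency(scene_graph)
```

The adjacency map makes it straightforward to check which regions are directly connected before reasoning about full routes.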

---

### `traverse/`

This folder contains **route-level traversability annotations** built on top of the scene graphs.

For each scene, the traverse annotations specify:
- Whether a route between two nodes is traversable for a given agent
- Agent-specific constraints (e.g., body size, ability to climb stairs)
- Valid and invalid edges along candidate paths
- Detailed reasons explaining *why* traversal is not possible when a route fails

These files provide **richer supervision** than the binary labels alone,
including intermediate decisions and failure rationales.
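The reduction from these route-level annotations to a single binary label can be sketched as follows. The record structure shown here (`route`, per-edge `traversable` flags, `reason` strings) is a hypothetical stand-in for the actual `<scene_id>-traverse.json` format:

```python
# Hypothetical traverse record; real field names may differ.
traverse_record = {
    "scene": "scene_0001",
    "agent": "wheelchair_user",
    "route": ["kitchen", "hallway", "bedroom"],
    "edges": [
        {"from": "kitchen", "to": "hallway", "traversable": True},
        {"from": "hallway", "to": "bedroom", "traversable": False,
         "reason": "stairs block wheeled agents"},
    ],
}

def route_is_traversable(record: dict):
    """A route is traversable only if every edge along it is;
    collect the stated reasons for any edges that are not."""
    reasons = [e["reason"] for e in record["edges"]
               if not e["traversable"]]
    return len(reasons) == 0, reasons

ok, reasons = route_is_traversable(traverse_record)
# ok is False here: one edge fails, and `reasons` explains why.
```

This is exactly the extra signal lost in the binary view: the label says only "not navigable", while the traverse annotation pinpoints the failing edge and its cause.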

---

## Relationship to the Main Benchmark

The main benchmark file:
```
capnav_v0_with_answer.parquet
```

contains a simplified representation of the ground truth:
- Each row corresponds to a `(question, scene, agent)` triple
- The `answer` column stores a **binary navigability label**
- This binary label is **derived from the graph and traverse annotations**
  in this directory

In other words:

> **The binary answers in the benchmark are a distilled view of the more
> comprehensive ground-truth annotations provided here.**

Researchers interested only in benchmarking model accuracy can rely on
the parquet file directly, while those seeking deeper insight into
navigation feasibility, path validity, or agent-specific constraints
should consult the annotations in `ground_truth/`.
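For benchmark-only use, loading the parquet file is a one-liner with pandas. The sketch below mocks one row as a DataFrame so it runs standalone; only the `answer` column name is confirmed by this README, and the other column names (`question`, `scene`, `agent`) are assumptions about the schema:

```python
import pandas as pd

# Stand-in for the real file; with it on disk you would instead run:
#   df = pd.read_parquet("capnav_v0_with_answer.parquet")
# Column names other than `answer` are assumed for this sketch.
df = pd.DataFrame({
    "question": ["Can I reach the bedroom from the kitchen?"],
    "scene": ["scene_0001"],
    "agent": ["wheelchair_user"],
    "answer": [False],
})

# Each row is one (question, scene, agent) triple with a binary label.
failures = df[~df["answer"]]
```

Rows where `answer` is False are the cases whose failure rationales can then be looked up in the `traverse/` annotations.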

---

## Annotation Process

All graph and traversability annotations in this directory are:
- **Manually constructed** based on the true geometry and semantics of each scene
- Verified to reflect realistic physical constraints and navigational affordances
- Used as the authoritative source of ground truth in CapNav

Details of the annotation pipeline, including tooling, validation logic,
and quality control procedures, are documented in the CapNav repository:
```
https://github.com/makeabilitylab/CapNav
```

Please refer specifically to the **annotation code and documentation**
in the repository for implementation details.

---

## Intended Use

This directory is provided to support:
- Transparency and reproducibility of the benchmark
- Analysis of failure cases beyond binary correctness
- Future extensions of CapNav (e.g., multi-step supervision, path prediction)
- Research on agent-specific navigation constraints and embodied reasoning

For most benchmarking use cases, users only need the main parquet file.
The contents of `ground_truth/` are optional but recommended for advanced analysis.