---
license: apache-2.0
task_categories:
- image-segmentation
- object-detection
- robotics
language:
- en
tags:
- robotics
- navigation
- frontiers
- autonomous-systems
- field-robotics
- vision-foundation-models
- outdoor-navigation
- traversability
- exploration
pretty_name: WildOS Frontiers Dataset
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path: "**"
---
# WildOS Frontiers Dataset
<div align="center">
<img src="https://leggedrobotics.github.io/wildos/static/images/Teaser-V.svg" alt="WildOS Teaser" width="800"/>
</div>
## Dataset Description
This dataset provides **visual frontier annotations** for outdoor long-range navigation, created for [WildOS: Open-Vocabulary Object Search in the Wild](https://leggedrobotics.github.io/wildos/). The annotations are built on top of images from the [GrandTour Dataset](https://huggingface.co/datasets/leggedrobotics/grand_tour_dataset).
**Visual frontiers** are image regions that mark candidate locations for further exploration, such as the end of a trail, an opening between trees, or a road curving out of view. This dataset enables training models to predict visual frontiers directly from RGB images, extending navigation reasoning beyond the geometric depth horizon.
## Dataset Structure
```
wildos/
├── annotations/ # Frontier annotations (362 JSON files)
│ └── annotation_00000.json ... annotation_00389.json
├── RGB_frames/ # Raw RGB frames (390 images + metadata)
│ ├── metadata.json # Maps to original GrandTour images
│ └── rgb_00000.png ... rgb_00389.png
├── RGB_rectified/ # Rectified RGB images (390 images)
│ └── rect_00000.png ... rect_00389.png
└── SAM_boundaries/ # SAM-2 boundary masks (390 images)
└── bound_00000.png ... bound_00389.png
```
### File Descriptions
| Folder | Description | Count |
|--------|-------------|-------|
| `annotations/` | JSON files containing frontier bounding box annotations | 362 |
| `RGB_frames/` | Original RGB frames from GrandTour dataset | 390 + 1 metadata |
| `RGB_rectified/` | Rectified (undistorted) RGB images | 390 |
| `SAM_boundaries/` | Binary masks from SAM-2 boundary detection | 390 |
> **Note:** Some images do not have corresponding annotations (362 out of 390 images are annotated). Images without annotations were excluded during quality control. The `SAM_boundaries/` folder contains SAM-2 boundary masks used in an ablation study, where frontiers were defined as the SAM boundary segments within human-annotated bounding boxes.
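Because annotation files are sparse relative to the image indices, it can be useful to enumerate which frame indices actually have annotations. The helper below is a minimal sketch (the function name is our own, not part of the dataset): it parses indices out of the `annotation_*.json` filenames.

```python
import re

def annotated_indices(filenames):
    """Return the sorted frame indices that have an annotation file,
    given an iterable of filenames from the annotations/ folder."""
    indices = []
    for name in filenames:
        m = re.fullmatch(r"annotation_(\d+)\.json", name)
        if m:
            indices.append(int(m.group(1)))
    return sorted(indices)

# In practice, pass the folder contents, e.g.:
#   from pathlib import Path
#   annotated_indices(p.name for p in Path("wildos/annotations").iterdir())
```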
## Annotation Format
Each annotation file contains a list of frontier detections with the following structure:
```json
[
{
"label": "frontier",
"start": [1326.0, 618.0],
"end": [1352.0, 636.0]
}
]
```
| Field | Description |
|-------|-------------|
| `label` | Frontier label (currently `"frontier"` for all annotations) |
| `start` | Top-left corner `[x, y]` of the bounding box |
| `end` | Bottom-right corner `[x, y]` of the bounding box |
> **Note:** The `label` field exists because we initially experimented with labeling frontiers of varying strengths. In the final dataset, all annotations use the single label `"frontier"`.
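Since `start` and `end` are opposite corners, converting an annotation to the common `(x, y, width, height)` box convention is a one-liner. A minimal sketch (the function name is our own):

```python
def to_xywh(ann):
    """Convert a frontier annotation's corner points to (x, y, w, h)."""
    x1, y1 = ann["start"]
    x2, y2 = ann["end"]
    return (x1, y1, x2 - x1, y2 - y1)
```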
## Example Annotations
<div align="center">
<table>
<tr>
<td><img src="https://leggedrobotics.github.io/wildos/static/images/label_examples/rect_00001.png" width="400"/></td>
<td><img src="https://leggedrobotics.github.io/wildos/static/images/label_examples/rect_00024.png" width="400"/></td>
</tr>
<tr>
<td><img src="https://leggedrobotics.github.io/wildos/static/images/label_examples/rect_00037.png" width="400"/></td>
<td><img src="https://leggedrobotics.github.io/wildos/static/images/label_examples/rect_00086.png" width="400"/></td>
</tr>
<tr>
<td><img src="https://leggedrobotics.github.io/wildos/static/images/label_examples/rect_00191.png" width="400"/></td>
<td><img src="https://leggedrobotics.github.io/wildos/static/images/label_examples/rect_00264.png" width="400"/></td>
</tr>
</table>
</div>
*Red regions indicate visual frontiers — candidate locations for further exploration.* More examples can be viewed [here](https://leggedrobotics.github.io/wildos/#frontier-annotations).
## Usage
### Loading Individual Files
```python
import json
from PIL import Image
# Load an annotation
with open("wildos/annotations/annotation_00000.json", "r") as f:
annotations = json.load(f)
# Load corresponding image
image = Image.open("wildos/RGB_rectified/rect_00000.png")
print(f"Image size: {image.size}")
print(f"Number of frontiers: {len(annotations)}")
```
### Visualizing Annotations
Visualize frontier annotations on images:
```python
import json
import cv2
def visualize_frontiers(image_path, annotation_path, output_path=None):
"""Draw frontier annotations on an image."""
# Load image
img = cv2.imread(image_path)
# Load annotations
with open(annotation_path, "r") as f:
annotations = json.load(f)
# Draw each frontier
for ann in annotations:
x1, y1 = int(ann["start"][0]), int(ann["start"][1])
x2, y2 = int(ann["end"][0]), int(ann["end"][1])
color = (0, 0, 255) # Red in BGR
# Draw semi-transparent rectangle
overlay = img.copy()
cv2.rectangle(overlay, (x1, y1), (x2, y2), color, -1)
cv2.addWeighted(overlay, 0.35, img, 0.65, 0, img)
cv2.rectangle(img, (x1, y1), (x2, y2), color, 2)
if output_path:
cv2.imwrite(output_path, img)
return img
# Example usage
visualize_frontiers(
"wildos/RGB_rectified/rect_00000.png",
"wildos/annotations/annotation_00000.json",
"output_visualization.png"
)
```
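The ablation mentioned above defines frontiers as the SAM-2 boundary segments falling inside the annotated boxes. A minimal sketch of that intersection (the function name is our own; it assumes the `SAM_boundaries/` masks load as 2D arrays with nonzero values on boundaries):

```python
import numpy as np

def frontier_boundary_mask(boundary_mask, annotations):
    """Keep only SAM boundary pixels that fall inside annotated frontier boxes."""
    keep = np.zeros_like(boundary_mask, dtype=bool)
    for ann in annotations:
        x1, y1 = (int(v) for v in ann["start"])
        x2, y2 = (int(v) for v in ann["end"])
        keep[y1:y2, x1:x2] = True  # mark the box interior
    return np.where(keep, boundary_mask, 0)
```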
### Metadata Mapping
The `metadata.json` file in `RGB_frames/` maps each image index to its source path in the GrandTour dataset:
```python
import json
with open("wildos/RGB_frames/metadata.json", "r") as f:
metadata = json.load(f)
# Find original GrandTour image for a specific frame index
original_path = metadata["0"] # e.g., "release_2024-11-03-07-57-34/hdr_front/hdr_front_01342.png"
print(f"Original GrandTour path: {original_path}")
```
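The source paths encode the GrandTour recording session as their first component (e.g. `release_2024-11-03-07-57-34`), so the mapping can also be inverted to group frames by session. A minimal sketch, assuming `metadata.json` maps string indices to relative paths as in the example above (the function name is our own):

```python
from collections import defaultdict

def frames_by_sequence(metadata):
    """Group frame indices by their GrandTour recording session."""
    groups = defaultdict(list)
    for idx, path in metadata.items():
        session = path.split("/")[0]  # first path component is the session
        groups[session].append(int(idx))
    return {session: sorted(indices) for session, indices in groups.items()}
```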
## Related Resources
- **Project Page**: [WildOS: Open-Vocabulary Object Search in the Wild](https://leggedrobotics.github.io/wildos/)
- **Source Dataset**: [GrandTour Dataset](https://huggingface.co/datasets/leggedrobotics/grand_tour_dataset)
## Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{shah2026wildosopenvocabularyobjectsearch,
title={WildOS: Open-Vocabulary Object Search in the Wild},
author={Hardik Shah and Erica Tevere and Deegan Atha and Marcel Kaufmann and Shehryar Khattak and Manthan Patel and Marco Hutter and Jonas Frey and Patrick Spieler},
year={2026},
eprint={2602.19308},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2602.19308},
}
```
## License
This dataset is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).