---
license: mit
task_categories:
- depth-estimation
- keypoint-detection
- image-feature-extraction
pretty_name: AerialExtreMatch Benchmark
viewer: false
tags:
- image
---
# AerialExtreMatch — Benchmark Dataset
[Code](https://github.com/Xecades/AerialExtreMatch) | [Project Page](https://xecades.github.io/AerialExtreMatch/) | Paper (WIP)
This repo contains the **benchmark** set for our paper *AerialExtreMatch: A Benchmark for Extreme-View Image Matching and Localization*. 32 difficulty levels are included. We also provide [**train**](https://huggingface.co/datasets/Xecades/AerialExtreMatch-Train) and [**localization**](https://huggingface.co/datasets/Xecades/AerialExtreMatch-Localization) datasets.
## Usage
Clone this repository and unzip the archive files. The archives are stored with Git LFS, so make sure it is installed (`git lfs install`) before cloning.
```bash
git clone git@hf.co:datasets/Xecades/AerialExtreMatch-Benchmark
cd AerialExtreMatch-Benchmark
unzip "*.zip"
rm *.zip
rm -rf .git
```
## Dataset Structure
After unpacking each .zip file:
<pre>
.
└── class_[id] <i>(class_0 ~ class_31)</i>
    ├── class_[id].npy   <i>(per-class metadata)</i>
    ├── depth/           <i>(depth maps, *.exr)</i>
    └── rgb/             <i>(RGB images, *.jpg)</i>
</pre>
- Each `class_[id].npy` file stores a dictionary with the keys `['poses', 'intrinsics', 'depth', 'rgb', 'overlap', 'pitch', 'scale', 'pair']`.
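Since the `.npy` file holds a Python dictionary rather than a plain array, it must be loaded with `allow_pickle=True`. A minimal sketch (the key names come from this card, but all array shapes and contents below are invented placeholders purely for illustration):

```python
import numpy as np

# Build a stand-in dict with the documented keys. Shapes and values here
# are assumptions for demonstration only, not the real dataset layout.
meta = {
    "poses": np.zeros((2, 4, 4)),       # assumed: per-image 4x4 pose matrices
    "intrinsics": np.zeros((2, 3, 3)),  # assumed: per-image 3x3 K matrices
    "depth": np.array(["depth/0.exr", "depth/1.exr"]),
    "rgb": np.array(["rgb/0.jpg", "rgb/1.jpg"]),
    "overlap": np.zeros(1),
    "pitch": np.zeros(2),
    "scale": np.zeros(1),
    "pair": np.array([[0, 1]]),
}
np.save("class_0_example.npy", meta)  # a dict is saved as a 0-d object array

# Loading: allow_pickle=True is required for object arrays,
# and .item() unwraps the 0-d array back into the dict.
data = np.load("class_0_example.npy", allow_pickle=True).item()
print(sorted(data.keys()))
```

The same `np.load(..., allow_pickle=True).item()` pattern should apply to the real `class_[id].npy` files.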
## Classification Metric
Note that the folders on disk are 0-indexed, while the table below is 1-indexed to match the paper; e.g. level 5 corresponds to `class_4`.
![Classification Metric](metric.png)
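The level-to-folder mapping above can be sketched as a one-line helper (`level_to_folder` is a hypothetical name, not part of the dataset's code):

```python
def level_to_folder(level: int) -> str:
    """Map a 1-indexed difficulty level (1..32) to its 0-indexed folder name."""
    if not 1 <= level <= 32:
        raise ValueError("level must be in 1..32")
    return f"class_{level - 1}"

print(level_to_folder(5))  # → class_4, matching the example above
```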