tags:
- planet
- multimodal
- retrieval
---

# Global Geo-Localization

## Dataset Summary

This dataset is Task 3 of [**MarsRetrieval**](https://github.com/ml-stat-Sustech/MarsRetrieval), a retrieval-centric benchmark for evaluating vision-language models (VLMs) on Mars geospatial discovery.

Task 3 simulates **planetary-scale discovery** by localizing scientific concepts within the global CTX mosaic, which comprises over **1.4 million** CTX tiles.

Ground-truth reference points are compiled from published global scientific catalogues of five Martian landforms:

- Alluvial Fans
- Glacier-Like Forms
- Landslides
- Pitted Cones
- Yardangs

## Task Formulation

- **Text → Image** retrieval
- **Image → Image** retrieval

The model retrieves top-K candidate tiles from the global index. A retrieved tile is considered correct if its projected center falls within a spatial tolerance radius (default r = 0.5°) of a ground-truth catalogue coordinate.
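
As an illustration, the hit criterion above could be sketched as follows. This is a minimal sketch, not the official evaluation code: the function names (`angular_distance_deg`, `is_hit`) and the use of great-circle angular separation measured in degrees are assumptions; the official scripts in the repository define the authoritative check.

```python
import math

def angular_distance_deg(lat1, lon1, lat2, lon2):
    """Great-circle angular separation in degrees (spherical law of cosines)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cos_c = math.sin(p1) * math.sin(p2) + math.cos(p1) * math.cos(p2) * math.cos(dlon)
    # Clamp to [-1, 1] to guard against floating-point drift before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_c))))

def is_hit(tile_center, gt_points, r=0.5):
    """A retrieved tile counts as correct if its projected center lies within
    r degrees of at least one ground-truth catalogue coordinate."""
    lat, lon = tile_center
    return any(angular_distance_deg(lat, lon, g_lat, g_lon) <= r
               for g_lat, g_lon in gt_points)
```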

## Metrics

Given the extreme sparsity of positives, we report:

- **AUPRC** (Area Under Precision–Recall Curve)
- **Optimal F1@K\*** (best F1 over retrieval depth K)

These metrics quantify planetary-scale distribution estimation rather than simple top-K accuracy.
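
For concreteness, both metrics can be computed from a ranked list of binary hits. This is an illustrative sketch, not the official implementation: `average_precision` is the standard step-wise approximation of AUPRC, and the helper names are assumptions.

```python
def pr_at_depths(ranked_hits, num_positives):
    """(precision@K, recall@K) at every depth K for a ranked 0/1 hit list."""
    tp, points = 0, []
    for k, hit in enumerate(ranked_hits, start=1):
        tp += hit
        points.append((tp / k, tp / num_positives))
    return points

def average_precision(ranked_hits, num_positives):
    """Step-wise approximation of AUPRC: mean of precision at each hit rank."""
    tp, total = 0, 0.0
    for k, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            total += tp / k
    return total / num_positives

def optimal_f1(ranked_hits, num_positives):
    """Best F1 over all retrieval depths K (the K* in F1@K*)."""
    best = 0.0
    for p, r in pr_at_depths(ranked_hits, num_positives):
        if p + r > 0:
            best = max(best, 2 * p * r / (p + r))
    return best
```

For a ranked list `[1, 0, 1, 0]` with 2 positives, the best depth is K = 3 (precision 2/3, recall 1), giving F1 = 0.8.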

## How to Use

```python
from datasets import load_dataset

# Load the ground-truth catalogs for the 5 landforms
gt_catalogs = load_dataset("SUSTech/marsretrieval-t3-geolocalization", "ground_truth")

# Load image queries used for image-based localization
queries = load_dataset("SUSTech/marsretrieval-t3-geolocalization", "image_queries")
```

For detailed instructions on the retrieval-centric protocol and official evaluation scripts, please refer to our [Official Dataset Documentation](https://github.com/ml-stat-Sustech/MarsRetrieval/blob/main/docs/DATASET.md).

## Citation

If you find this useful in your research, please consider citing:

```bibtex
@article{wang2026marsretrieval,
  title={MarsRetrieval: Benchmarking Vision-Language Models for Planetary-Scale Geospatial Retrieval on Mars},
  author={Wang, Shuoyuan and Wang, Yiran and Wei, Hongxin},
  journal={arXiv preprint},
  year={2026}
}
```