---
license: cc-by-4.0
task_categories:
- image-to-image
- image-feature-extraction
tags:
- mars
- crater
- retrieval
- remote-sensing
- planetary-science
- vision-transformer
pretty_name: CraterBench-R
size_categories:
- 10K<n<100K
---
# CraterBench-R
CraterBench-R is an instance-level crater retrieval benchmark built from Mars CTX imagery.
This repository contains the released benchmark images, official split files, relevance metadata, and a minimal evaluation example.
## Summary
- `25,000` crater identities in the benchmark gallery
- `50,000` gallery images with two canonical context crops per crater
- `1,000` held-out query crater identities
- `5,000` manually verified query images with five views per query crater
- official `train` and `test` split manifests with relative paths only
## Download
```bash
# Download the repository
pip install huggingface_hub
huggingface-cli download jfang/CraterBench-R --repo-type dataset --local-dir CraterBench-R
cd CraterBench-R
# Extract images
unzip images.zip
```
After extraction, the directory should contain `images/gallery/` (50,000 JPEGs) and `images/query/` (5,000 JPEGs).
## Repository Layout
- `images.zip`: all benchmark images (gallery + query) in a single archive
- `splits/test.json`: official benchmark split with full gallery plus query set
- `splits/train.json`: train gallery with the full test relevance closure removed
- `metadata/query_relevance.json`: raw co-visible crater IDs and gallery-filtered relevance
- `metadata/stats.json`: release summary statistics
- `metadata/source/retrieval_ground_truth_raw.csv`: raw query relevance CSV for reference
- `examples/eval_timm_global.py`: minimal global-descriptor example for any `timm` image model
- `requirements.txt`: lightweight requirements for the example script
## Split Semantics
`splits/test.json` is the official benchmark split and includes both:
- the full gallery
- the query set evaluated against that gallery
The `ground_truth` field maps each query crater ID to gallery-present relevant crater IDs.
The raw query co-visibility information is preserved separately in `metadata/query_relevance.json`:
- `co_visible_ids_all`: all raw co-visible crater IDs from the source annotation
- `relevant_gallery_ids`: the subset that is present in the released gallery
`splits/train.json` is intended for supervised or metric-learning experiments. It excludes the full test relevance closure, not just the 1,000 direct query crater IDs.
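To make the distinction concrete, here is a minimal sketch of how the two relevance fields relate. The field names come from this card; the crater IDs are illustrative placeholders, not real entries from the release.

```python
# Hypothetical in-memory excerpt mirroring metadata/query_relevance.json;
# field names are from the card, the IDs below are made up for illustration.
relevance = {
    "02-1-002927": {
        "co_visible_ids_all": ["02-1-002927", "02-1-003001", "99-9-000001"],
        "relevant_gallery_ids": ["02-1-002927", "02-1-003001"],
    }
}

# Evaluation should use the gallery-filtered subset: a co-visible crater
# that is absent from the released gallery can never be retrieved.
for query_id, entry in relevance.items():
    assert set(entry["relevant_gallery_ids"]) <= set(entry["co_visible_ids_all"])
    print(query_id, "->", entry["relevant_gallery_ids"])
```
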
## Official Counts
- test gallery: `25,000` crater IDs / `50,000` images
- test queries: `1,000` crater IDs / `5,000` images
- train gallery: `23,941` crater IDs / `47,882` images
- raw multi-ID query crater IDs: `428`
- gallery-present multi-ID query crater IDs: `59`

Note that the train gallery drops `1,059` crater IDs (`25,000 − 23,941`), more than the `1,000` direct query IDs, because the full test relevance closure is excluded.
## Quick Start
```bash
pip install -r requirements.txt
unzip images.zip # if not already extracted
python examples/eval_timm_global.py \
    --data-root . \
    --split test \
    --model vit_small_patch16_224.dino \
    --pool cls \
    --batch-size 64 \
    --device cuda
```
The example script:
- loads `splits/test.json`
- creates a pretrained `timm` model
- extracts one feature vector per image
- performs cosine retrieval against the released gallery
- reports Recall@1/5/10, mAP, and MRR
It is intentionally simple and meant as a working baseline rather than the fastest possible evaluator.
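The cosine-retrieval step the script performs can be sketched with NumPy on synthetic features. This is not the released evaluator; shapes, IDs, and the `relevant` mapping below are stand-ins for the real model features and ground truth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 8 gallery vectors and 2 query vectors of dim 16.
# In the benchmark these would be timm model features, one per image.
gallery = rng.normal(size=(8, 16)).astype(np.float32)
gallery_ids = ["g0", "g0", "g1", "g1", "g2", "g2", "g3", "g3"]
queries = gallery[[0, 4]] + 0.01 * rng.normal(size=(2, 16)).astype(np.float32)
relevant = {0: {"g0"}, 1: {"g2"}}  # query index -> relevant crater IDs

# L2-normalize so the dot product equals cosine similarity.
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
queries /= np.linalg.norm(queries, axis=1, keepdims=True)

sims = queries @ gallery.T           # (num_queries, num_gallery)
ranking = np.argsort(-sims, axis=1)  # best match first

def recall_at_k(k):
    """Fraction of queries with at least one relevant ID in the top k."""
    hits = 0
    for qi, order in enumerate(ranking):
        top_ids = {gallery_ids[g] for g in order[:k]}
        hits += bool(top_ids & relevant[qi])
    return hits / len(ranking)

print("Recall@1:", recall_at_k(1))
```

mAP and MRR follow the same pattern, walking the ranked list per query instead of only checking the top `k` entries.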
## Manifest Format
Each split JSON has the form:
```json
{
  "split_name": "train or test",
  "version": "release_v1",
  "gallery_images": [
    {
      "image_id": "02-1-002611_2x",
      "path": "images/gallery/02-1-002611_2x.jpg",
      "crater_id": "02-1-002611",
      "view_type": "2x"
    }
  ],
  "query_images": [
    {
      "image_id": "02-1-002927_view_1",
      "query_id": "02-1-002927",
      "path": "images/query/02-1-002927_view_1.jpg",
      "crater_id": "02-1-002927",
      "view": 1,
      "manual_verified": true
    }
  ],
  "ground_truth": {
    "02-1-002927": ["02-1-002927"]
  }
}
```
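A manifest in this shape can be consumed with the standard library alone. The miniature manifest embedded below reuses the example entries above; only the grouping logic is new.

```python
import json

# A hypothetical miniature manifest with the same shape as splits/test.json.
manifest_text = """
{
  "split_name": "test",
  "version": "release_v1",
  "gallery_images": [
    {"image_id": "02-1-002611_2x", "path": "images/gallery/02-1-002611_2x.jpg",
     "crater_id": "02-1-002611", "view_type": "2x"}
  ],
  "query_images": [
    {"image_id": "02-1-002927_view_1", "query_id": "02-1-002927",
     "path": "images/query/02-1-002927_view_1.jpg",
     "crater_id": "02-1-002927", "view": 1, "manual_verified": true}
  ],
  "ground_truth": {"02-1-002927": ["02-1-002927"]}
}
"""
manifest = json.loads(manifest_text)

# Group gallery image paths by crater identity for retrieval evaluation.
gallery_by_crater = {}
for img in manifest["gallery_images"]:
    gallery_by_crater.setdefault(img["crater_id"], []).append(img["path"])

print(len(manifest["query_images"]), "queries,",
      len(gallery_by_crater), "gallery crater IDs")
```
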
## Validation
The repository includes one small validation utility:
- `scripts/validate_release.py`: checks split consistency, relative paths, and train/test separation
To validate the package locally:
```bash
python scripts/validate_release.py
```
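The actual checks live in `scripts/validate_release.py`; the sketch below only illustrates the kinds of invariants the card describes, using made-up IDs and in-memory data rather than the real split files.

```python
import os

# Illustrative stand-ins; real values come from splits/*.json.
test_gt = {"02-1-002927": ["02-1-002927", "02-1-003001"]}
train_gallery_ids = {"02-1-002611", "02-1-004000"}
paths = ["images/gallery/02-1-002611_2x.jpg",
         "images/query/02-1-002927_view_1.jpg"]

# 1. All manifest paths must be relative, so the release stays portable.
assert all(not os.path.isabs(p) for p in paths)

# 2. The train gallery must exclude the full test relevance closure:
#    every query ID plus every crater ID its ground truth can reach.
test_closure = set(test_gt) | {cid for ids in test_gt.values() for cid in ids}
assert train_gallery_ids.isdisjoint(test_closure)

print("validation checks passed")
```
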
## Notes
- Image paths in manifests are relative and portable.
- `splits/test.json` is the official benchmark entrypoint.
- `splits/train.json` is intended for training or fine-tuning experiments.