CraterBench-R
CraterBench-R is an instance-level crater retrieval benchmark built from Mars CTX imagery.
This repository contains the released benchmark images, official split files, relevance metadata, and a minimal evaluation example.
Summary
- 25,000 crater identities in the benchmark gallery
- 50,000 gallery images, with two canonical context crops per crater
- 1,000 held-out query crater identities
- 5,000 manually verified query images, with five views per query crater
- official train and test split manifests with relative paths only
Download
# Download the repository
pip install huggingface_hub
huggingface-cli download jfang/CraterBench-R --repo-type dataset --local-dir CraterBench-R
cd CraterBench-R
# Extract images
unzip images.zip
After extraction the directory should contain images/gallery/ (50,000 JPEGs) and images/query/ (5,000 JPEGs).
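To sanity-check the extraction, a small helper can count the JPEGs in each folder (this `count_jpegs` function is an illustrative sketch, not part of the release):

```python
from pathlib import Path

def count_jpegs(root: str) -> dict:
    """Count JPEG files under the expected images/gallery and images/query folders."""
    root_path = Path(root)
    return {
        name: sum(1 for _ in (root_path / "images" / name).glob("*.jpg"))
        for name in ("gallery", "query")
    }
```

After a complete extraction, the counts should be 50,000 for `gallery` and 5,000 for `query`.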
Repository Layout
- images.zip: all benchmark images (gallery + query) in a single archive
- splits/test.json: official benchmark split with the full gallery plus the query set
- splits/train.json: train gallery with the full test relevance closure removed
- metadata/query_relevance.json: raw co-visible crater IDs and gallery-filtered relevance
- metadata/stats.json: release summary statistics
- metadata/source/retrieval_ground_truth_raw.csv: raw query relevance CSV for reference
- examples/eval_timm_global.py: minimal global-descriptor example for any timm image model
- requirements.txt: lightweight requirements for the example script
Split Semantics
splits/test.json is the official benchmark split and includes both:
- the full gallery
- the query set evaluated against that gallery
The ground_truth field maps each query crater ID to gallery-present relevant crater IDs.
The raw query co-visibility information is preserved separately in metadata/query_relevance.json:
- co_visible_ids_all: all raw co-visible crater IDs from the source annotation
- relevant_gallery_ids: the subset that is present in the released gallery
splits/train.json is intended for supervised or metric-learning experiments. It excludes the full test relevance closure, not just the 1,000 direct query crater IDs.
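As a sketch of what excluding the full relevance closure means, the closure is every query crater ID plus every gallery-relevant ID it maps to; none of these may appear in the train gallery. The helper below (`check_relevance_closure_excluded` is hypothetical, not part of the release) returns any leaked IDs:

```python
def check_relevance_closure_excluded(test_ground_truth: dict,
                                     train_gallery_crater_ids: set) -> set:
    """Return crater IDs from the test relevance closure that leak into the train gallery.

    test_ground_truth: {query_crater_id: [relevant gallery crater IDs]}
    train_gallery_crater_ids: crater IDs present in splits/train.json's gallery
    """
    closure = set(test_ground_truth)  # the query crater IDs themselves
    for relevant_ids in test_ground_truth.values():
        closure.update(relevant_ids)   # plus everything they are relevant to
    return closure & train_gallery_crater_ids
```

An empty result means the train gallery is clean with respect to the test closure.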
Official Counts
- test gallery: 25,000 crater IDs / 50,000 images
- test queries: 1,000 crater IDs / 5,000 images
- train gallery: 23,941 crater IDs / 47,882 images
- raw multi-ID query crater IDs: 428
- gallery-present multi-ID query crater IDs: 59
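These counts can be re-derived directly from a split manifest. The sketch below (`split_counts` is a hypothetical helper, assuming the manifest schema shown under Manifest Format) summarizes crater-ID and image counts:

```python
def split_counts(split: dict) -> dict:
    """Summarize unique crater-ID and image counts for a split manifest dict."""
    gallery = split.get("gallery_images", [])
    queries = split.get("query_images", [])
    return {
        "gallery_crater_ids": len({img["crater_id"] for img in gallery}),
        "gallery_images": len(gallery),
        "query_crater_ids": len({img["crater_id"] for img in queries}),
        "query_images": len(queries),
    }
```

Running it on splits/test.json should reproduce the 25,000 / 50,000 and 1,000 / 5,000 figures above.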
Quick Start
pip install -r requirements.txt
unzip images.zip # if not already extracted
python examples/eval_timm_global.py \
--data-root . \
--split test \
--model vit_small_patch16_224.dino \
--pool cls \
--batch-size 64 \
--device cuda
The example script:
- loads splits/test.json
- creates a pretrained timm model
- extracts one feature vector per image
- performs cosine retrieval against the released gallery
- reports Recall@1/5/10, mAP, and MRR
It is intentionally simple and meant as a working baseline rather than the fastest possible evaluator.
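Under common definitions of these metrics (an assumption here; the exact formulas used by examples/eval_timm_global.py may differ), they can be computed from per-query ranked crater-ID lists as follows:

```python
def retrieval_metrics(rankings: dict, relevant: dict, ks=(1, 5, 10)) -> dict:
    """Compute Recall@K, mAP, and MRR from ranked retrieval results.

    rankings: {query_id: [crater_id, ...]} ranked best-first
    relevant: {query_id: set of relevant crater IDs}
    Recall@K counts a query as a hit if any relevant ID appears in the top K.
    """
    recall = {k: 0.0 for k in ks}
    ap_sum, rr_sum = 0.0, 0.0
    n = len(rankings)
    for qid, ranked in rankings.items():
        rel = relevant[qid]
        for k in ks:
            if any(cid in rel for cid in ranked[:k]):
                recall[k] += 1
        # average precision: precision at each rank where a relevant ID appears
        hits, precisions = 0, []
        for rank, cid in enumerate(ranked, start=1):
            if cid in rel:
                hits += 1
                precisions.append(hits / rank)
        ap_sum += sum(precisions) / max(len(rel), 1)
        # reciprocal rank of the first relevant hit (0 if none retrieved)
        rr_sum += next((1.0 / r for r, cid in enumerate(ranked, 1) if cid in rel), 0.0)
    return {f"recall@{k}": v / n for k, v in recall.items()} | {
        "mAP": ap_sum / n, "MRR": rr_sum / n}
```

A query whose only relevant crater is ranked second scores 0 for Recall@1 but 1 for Recall@5, with AP and RR both 0.5.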
Manifest Format
Each split JSON has the form:
{
"split_name": "train or test",
"version": "release_v1",
"gallery_images": [
{
"image_id": "02-1-002611_2x",
"path": "images/gallery/02-1-002611_2x.jpg",
"crater_id": "02-1-002611",
"view_type": "2x"
}
],
"query_images": [
{
"image_id": "02-1-002927_view_1",
"query_id": "02-1-002927",
"path": "images/query/02-1-002927_view_1.jpg",
"crater_id": "02-1-002927",
"view": 1,
"manual_verified": true
}
],
"ground_truth": {
"02-1-002927": ["02-1-002927"]
}
}
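As a minimal illustration of consuming this format (`load_split` is a hypothetical helper, not part of the release), a manifest can be loaded and its gallery indexed by crater ID:

```python
import json
from collections import defaultdict

def load_split(path: str):
    """Load a split manifest and index gallery image paths by crater ID."""
    with open(path) as f:
        split = json.load(f)
    gallery_by_crater = defaultdict(list)
    for img in split["gallery_images"]:
        gallery_by_crater[img["crater_id"]].append(img["path"])
    return split, dict(gallery_by_crater)
```

Each crater ID should map to its two canonical context crops in the released gallery.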
Validation
The repository includes one small validation utility:
- scripts/validate_release.py: checks split consistency, relative paths, and train/test separation
To validate the package locally:
python scripts/validate_release.py
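The released script is the authoritative check. As an illustration of one of the checks it describes, a relative-path validation over a manifest dict might look like this (`find_non_relative_paths` is hypothetical):

```python
def find_non_relative_paths(split: dict) -> list:
    """Return manifest image paths that are absolute or escape the dataset root."""
    bad = []
    for key in ("gallery_images", "query_images"):
        for img in split.get(key, []):
            p = img["path"]
            # absolute paths or any '..' segment would break portability
            if p.startswith("/") or ".." in p.split("/"):
                bad.append(p)
    return bad
```

An empty list means every path stays inside the extracted dataset directory.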
Notes
- Image paths in manifests are relative and portable.
- splits/test.json is the official benchmark entry point.
- splits/train.json is intended for training or fine-tuning experiments.