---
license: cc-by-4.0
task_categories:
- image-classification
- zero-shot-classification
tags:
- biology
- ecology
- wildlife
- camera-traps
- vision-transformers
- clustering
- zero-shot-learning
- biodiversity
- reproducibility
- benchmarking
- embeddings
- dinov3
- dinov2
- bioclip
- clip
- siglip
language:
- en
pretty_name: HUGO-Bench Paper Reproducibility Data
size_categories:
- 100K<n<1M
source_datasets:
- AI-EcoNet/HUGO-Bench
configs:
- config_name: primary_benchmarking
  data_files: primary_benchmarking/train-*.parquet
  default: true
- config_name: model_comparison
  data_files: model_comparison/train-*.parquet
- config_name: dimensionality_reduction
  data_files: dimensionality_reduction/train-*.parquet
- config_name: clustering_supervised
  data_files: clustering_supervised/train-*.parquet
- config_name: clustering_unsupervised
  data_files: clustering_unsupervised/train-*.parquet
- config_name: cluster_count_prediction
  data_files: cluster_count_prediction/train-*.parquet
- config_name: intra_species_variation
  data_files: intra_species_variation/train-*.parquet
- config_name: scaling_tests
  data_files: scaling_tests/train-*.parquet
- config_name: uneven_distribution
  data_files: uneven_distribution/train-*.parquet
- config_name: subsample_definitions
  data_files: subsample_definitions/train-*.parquet
- config_name: embeddings_dinov3_vith16plus
  data_files: embeddings_dinov3_vith16plus/train-*.parquet
- config_name: embeddings_dinov2_vitg14
  data_files: embeddings_dinov2_vitg14/train-*.parquet
- config_name: embeddings_bioclip2_vitl14
  data_files: embeddings_bioclip2_vitl14/train-*.parquet
- config_name: embeddings_clip_vitl14
  data_files: embeddings_clip_vitl14/train-*.parquet
- config_name: embeddings_siglip_vitb16
  data_files: embeddings_siglip_vitb16/train-*.parquet
---
# HUGO-Bench Paper Reproducibility

Supplementary data and reproducibility materials for the paper:

**Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study**
Hugo Markoff, Stefan Hein Bengtson, Michael Ørsted
*Aalborg University, Denmark*
## Dataset Description

This repository contains the complete experimental results, pre-computed embeddings, and execution logs from our benchmarking study, which evaluates Vision Transformer models for zero-shot clustering of wildlife camera trap images.
## Related Resources

- **Source Images:** AI-EcoNet/HUGO-Bench (139,111 wildlife images)
- **Code Repository:** Coming soon
## Repository Structure

```
├── primary_benchmarking/                  # Main benchmark results (27,600 configurations)
├── model_comparison/                      # Cross-model comparisons
├── dimensionality_reduction/              # UMAP/t-SNE/PCA analysis
├── clustering_supervised/                 # Supervised clustering metrics
├── clustering_unsupervised/               # Unsupervised clustering results
├── cluster_count_prediction/              # Optimal cluster count analysis
├── intra_species_variation/               # Within-species cluster analysis
│   ├── train-*.parquet                    # Analysis results
│   └── cluster_image_mappings.json        # Image-to-cluster assignments
├── scaling_tests/                         # Sample size scaling experiments
├── uneven_distribution/                   # Class imbalance experiments
├── subsample_definitions/                 # Reproducible subsample definitions
├── embeddings_*/                          # Pre-computed embeddings (5 models)
│   ├── embeddings_dinov3_vith16plus/      # 120K embeddings, 1280-dim
│   ├── embeddings_dinov2_vitg14/          # 120K embeddings, 1536-dim
│   ├── embeddings_bioclip2_vitl14/        # 120K embeddings, 768-dim
│   ├── embeddings_clip_vitl14/            # 120K embeddings, 768-dim
│   └── embeddings_siglip_vitb16/          # 120K embeddings, 768-dim
├── extreme_uneven_embeddings/             # Full dataset embeddings (PKL)
│   ├── aves_full_dinov3_embeddings.pkl    # 74,396 embeddings
│   └── mammalia_full_dinov3_embeddings.pkl  # 65,484 embeddings
└── execution_logs/                        # Experiment execution logs
```
## Quick Start

### Load Primary Benchmark Results

```python
from datasets import load_dataset

# Load main benchmark results (27,600 configurations)
ds = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "primary_benchmarking")
print(f"Configurations: {len(ds['train'])}")
```
### Load Pre-computed Embeddings

```python
from datasets import load_dataset

# Load DINOv3 embeddings (120,000 images)
embeddings = load_dataset(
    "AI-EcoNet/HUGO-Bench-Paper-Reproducibility",
    "embeddings_dinov3_vith16plus",
)
print(f"Embeddings shape: {len(embeddings['train'])} x {len(embeddings['train'][0]['embedding'])}")
```
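Each row of an embeddings config stores its vector as a Python list in the `embedding` column, while clustering code usually wants a single `(n_images, dim)` matrix. A minimal sketch of the conversion, using a few synthetic rows in place of the downloaded split (the L2-normalisation step is a common preprocessing choice for ViT embeddings, not something this dataset mandates):

```python
import numpy as np

# Synthetic stand-in for a few rows of an embeddings config,
# each accessed like embeddings['train'][i]['embedding'].
rows = [{"embedding": [0.1 * i + 0.01 * j for j in range(8)]} for i in range(5)]

# Stack the per-row lists into one (n_images, dim) float32 matrix.
X = np.asarray([r["embedding"] for r in rows], dtype=np.float32)

# Optional: L2-normalise so dot products equal cosine similarity.
X /= np.linalg.norm(X, axis=1, keepdims=True)

print(X.shape)  # (5, 8)
```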
### Load Specific Analysis Results

```python
from datasets import load_dataset

# Model comparison results
model_comp = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "model_comparison")

# Scaling test results
scaling = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "scaling_tests")

# Intra-species variation analysis
intra = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "intra_species_variation")
```
### Load Cluster Image Mappings

The intra-species analysis includes a mapping file showing which images belong to which clusters:

```python
import json

from huggingface_hub import hf_hub_download

# Download the mapping file
mapping_file = hf_hub_download(
    "AI-EcoNet/HUGO-Bench-Paper-Reproducibility",
    "intra_species_variation/cluster_image_mappings.json",
    repo_type="dataset",
)

with open(mapping_file) as f:
    mappings = json.load(f)

# Structure: {species: {run: {cluster: [image_names]}}}
print(f"Species analyzed: {list(mappings.keys())}")
```
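As a sketch of how that nested structure can be traversed, the stdlib-only example below counts images per cluster for every (species, run) pair. The species, run, cluster, and image names here are invented stand-ins shaped like the documented `{species: {run: {cluster: [image_names]}}}` layout:

```python
import json

# Hypothetical mapping in the documented shape; all names are
# invented for illustration, not taken from the real file.
mappings = json.loads("""
{
  "vulpes_vulpes": {
    "run_0": {
      "0": ["img_001.jpg", "img_002.jpg"],
      "1": ["img_003.jpg"]
    }
  }
}
""")

# Count images per cluster for every (species, run) pair.
cluster_sizes = {
    (species, run): {cluster: len(images) for cluster, images in clusters.items()}
    for species, runs in mappings.items()
    for run, clusters in runs.items()
}

print(cluster_sizes)  # {('vulpes_vulpes', 'run_0'): {'0': 2, '1': 1}}
```

The same two nested loops work unchanged on the real `cluster_image_mappings.json` once downloaded.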
### Load Full Dataset Embeddings

For the extreme uneven distribution experiments, we provide full-dataset embeddings as pickle files:

```python
import pickle

from huggingface_hub import hf_hub_download

# Download the Aves embeddings (74,396 images)
pkl_file = hf_hub_download(
    "AI-EcoNet/HUGO-Bench-Paper-Reproducibility",
    "extreme_uneven_embeddings/aves_full_dinov3_embeddings.pkl",
    repo_type="dataset",
)

with open(pkl_file, "rb") as f:
    data = pickle.load(f)

print(f"Embeddings: {data['embeddings'].shape}")  # (74396, 1280)
print(f"Labels: {len(data['labels'])}")
print(f"Paths: {len(data['paths'])}")
```
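The pickle payload holds parallel `embeddings`, `labels`, and `paths` entries, so one class can be sliced out with a boolean mask. A minimal sketch using a tiny synthetic payload in place of the real file (the label values and paths are invented for illustration):

```python
import numpy as np

# Synthetic stand-in for the pickled payload: parallel
# embeddings/labels/paths entries, as in the real file.
data = {
    "embeddings": np.arange(12, dtype=np.float32).reshape(4, 3),
    "labels": ["corvus", "parus", "corvus", "turdus"],
    "paths": [f"aves/img_{i}.jpg" for i in range(4)],
}

# Boolean mask over the parallel label list selects one class.
mask = np.array([lbl == "corvus" for lbl in data["labels"]])
corvus_embeddings = data["embeddings"][mask]
corvus_paths = [p for p, keep in zip(data["paths"], mask) if keep]

print(corvus_embeddings.shape)  # (2, 3)
print(corvus_paths)             # ['aves/img_0.jpg', 'aves/img_2.jpg']
```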
## Experimental Setup

### Models Evaluated

| Model | Architecture | Embedding Dim | Pre-training |
|---|---|---|---|
| DINOv3 | ViT-H/16+ | 1280 | Self-supervised |
| DINOv2 | ViT-G/14 | 1536 | Self-supervised |
| BioCLIP 2 | ViT-L/14 | 768 | Biology domain |
| CLIP | ViT-L/14 | 768 | Contrastive |
| SigLIP | ViT-B/16 | 768 | Sigmoid loss |
### Clustering Methods

- K-Means, DBSCAN, HDBSCAN, Agglomerative, Spectral
- GMM (Gaussian Mixture Models)
- With and without dimensionality reduction (UMAP, t-SNE, PCA)
### Evaluation Metrics

- Supervised: Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), Accuracy, F1
- Unsupervised: Silhouette Score, Calinski-Harabasz Index, Davies-Bouldin Index
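The supervised metrics compare predicted clusters against ground-truth labels, while the unsupervised ones score cluster geometry alone. A minimal scikit-learn sketch on synthetic 2-D blobs (the blob parameters are arbitrary stand-ins; the real experiments run these metrics on the ViT embeddings above):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (
    adjusted_rand_score,
    normalized_mutual_info_score,
    silhouette_score,
)

# Three well-separated synthetic "species" in 2-D.
X, y_true = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)

# Cluster without using the labels (the zero-shot setting).
y_pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Supervised metrics: compare predicted clusters to ground truth.
ari = adjusted_rand_score(y_true, y_pred)
nmi = normalized_mutual_info_score(y_true, y_pred)

# Unsupervised metric: cluster geometry only, no labels needed.
sil = silhouette_score(X, y_pred)

print(f"ARI={ari:.3f}  NMI={nmi:.3f}  silhouette={sil:.3f}")
```

On tight, well-separated blobs like these, ARI and NMI approach 1.0; on real camera-trap embeddings the scores are what the benchmark tables in `primary_benchmarking` report.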
## Citation

If you use this dataset, please cite:

```bibtex
@article{markoff2026vision,
  title={Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study},
  author={Markoff, Hugo and Bengtson, Stefan Hein and Ørsted, Michael},
  journal={[Journal/Conference]},
  year={2026}
}
```
## License

This dataset is released under the CC-BY-4.0 license.
## Contact

For questions or issues, please open an issue in this repository or contact the authors.