---
license: cc-by-4.0
task_categories:
- image-classification
- zero-shot-classification
tags:
- biology
- ecology
- wildlife
- camera-traps
- vision-transformers
- clustering
- zero-shot-learning
- biodiversity
- reproducibility
- benchmarking
- embeddings
- dinov3
- dinov2
- bioclip
- clip
- siglip
language:
- en
pretty_name: HUGO-Bench Paper Reproducibility Data
size_categories:
- 100K<n<1M
---

> **Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study**
>
> Hugo Markoff, Stefan Hein Bengtson, Michael Ørsted
>
> Aalborg University, Denmark

## Dataset Description

This repository contains the complete experimental results, pre-computed embeddings, and execution logs from our benchmarking study evaluating Vision Transformer models for zero-shot clustering of wildlife camera-trap images.

### Related Resources

- **Source Images**: [AI-EcoNet/HUGO-Bench](https://huggingface.co/datasets/AI-EcoNet/HUGO-Bench) - 139,111 wildlife images
- **Code Repository**: Coming soon

## Repository Structure

```
├── primary_benchmarking/            # Main benchmark results (27,600 configurations)
├── model_comparison/                # Cross-model comparisons
├── dimensionality_reduction/        # UMAP/t-SNE/PCA analysis
├── clustering_supervised/           # Supervised clustering metrics
├── clustering_unsupervised/         # Unsupervised clustering results
├── cluster_count_prediction/        # Optimal cluster count analysis
├── intra_species_variation/         # Within-species cluster analysis
│   ├── train-*.parquet              # Analysis results
│   └── cluster_image_mappings.json  # Image-to-cluster assignments
├── scaling_tests/                   # Sample size scaling experiments
├── uneven_distribution/             # Class imbalance experiments
├── subsample_definitions/           # Reproducible subsample definitions
├── embeddings_*/                    # Pre-computed embeddings (5 models)
│   ├── embeddings_dinov3_vith16plus/  # 120K embeddings, 1280-dim
│   ├── embeddings_dinov2_vitg14/      # 120K embeddings, 1536-dim
│   ├── embeddings_bioclip2_vitl14/    # 120K embeddings, 768-dim
│   ├── embeddings_clip_vitl14/        # 120K embeddings, 768-dim
│   └── embeddings_siglip_vitb16/      # 120K embeddings, 768-dim
├── extreme_uneven_embeddings/       # Full dataset embeddings (PKL)
│   ├── aves_full_dinov3_embeddings.pkl      # 74,396 embeddings
│   └── mammalia_full_dinov3_embeddings.pkl  # 65,484 embeddings
└── execution_logs/                  # Experiment execution logs
```

## Quick Start

### Load Primary Benchmark Results

```python
from datasets import load_dataset

# Load main benchmark results (27,600 configurations)
ds = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "primary_benchmarking")
print(f"Configurations: {len(ds['train'])}")
```

### Load Pre-computed Embeddings

```python
from datasets import load_dataset

# Load DINOv3 embeddings (120,000 images)
embeddings = load_dataset(
    "AI-EcoNet/HUGO-Bench-Paper-Reproducibility",
    "embeddings_dinov3_vith16plus"
)
print(f"Embeddings shape: {len(embeddings['train'])} x {len(embeddings['train'][0]['embedding'])}")
```

### Load Specific Analysis Results

```python
from datasets import load_dataset

# Model comparison results
model_comp = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "model_comparison")

# Scaling test results
scaling = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "scaling_tests")

# Intra-species variation analysis
intra = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "intra_species_variation")
```

### Load Cluster Image Mappings

The intra-species analysis includes a mapping file showing which images belong to which clusters:

```python
from huggingface_hub import hf_hub_download
import json

# Download mapping file
mapping_file = hf_hub_download(
    "AI-EcoNet/HUGO-Bench-Paper-Reproducibility",
    "intra_species_variation/cluster_image_mappings.json",
    repo_type="dataset"
)
with open(mapping_file) as f:
    mappings = json.load(f)

# Structure: {species: {run: {cluster: [image_names]}}}
print(f"Species analyzed: {list(mappings.keys())}")
```

### Load Full Dataset Embeddings

For the extreme uneven distribution experiments, we provide full dataset embeddings:

```python
from huggingface_hub import hf_hub_download
import pickle

# Download Aves embeddings (74,396 images)
pkl_file = hf_hub_download(
    "AI-EcoNet/HUGO-Bench-Paper-Reproducibility",
    "extreme_uneven_embeddings/aves_full_dinov3_embeddings.pkl",
    repo_type="dataset"
)
with open(pkl_file, 'rb') as f:
    data = pickle.load(f)

print(f"Embeddings: {data['embeddings'].shape}")  # (74396, 1280)
print(f"Labels: {len(data['labels'])}")
print(f"Paths: {len(data['paths'])}")
```

## Experimental Setup

### Models Evaluated

| Model | Architecture | Embedding Dim | Pre-training |
|-------|--------------|---------------|--------------|
| DINOv3 | ViT-H/16+ | 1280 | Self-supervised |
| DINOv2 | ViT-G/14 | 1536 | Self-supervised |
| BioCLIP 2 | ViT-L/14 | 768 | Biology domain |
| CLIP | ViT-L/14 | 768 | Contrastive |
| SigLIP | ViT-B/16 | 768 | Sigmoid loss |

### Clustering Methods

- K-Means, DBSCAN, HDBSCAN, Agglomerative, Spectral
- GMM (Gaussian Mixture Models)
- With and without dimensionality reduction (UMAP, t-SNE, PCA)

### Evaluation Metrics

- **Supervised**: Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), Accuracy, F1
- **Unsupervised**: Silhouette Score, Calinski-Harabasz Index, Davies-Bouldin Index

## Citation

If you use this dataset, please cite:

```bibtex
@article{markoff2026vision,
  title={Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study},
  author={Markoff, Hugo and Bengtson, Stefan Hein and Ørsted, Michael},
  journal={[Journal/Conference]},
  year={2026}
}
```

## License

This dataset is released under the [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/).

## Contact

For questions or issues, please open an issue in this repository or contact the authors.
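## Example: Scoring a Clustering

The supervised metrics used in the benchmark (ARI, NMI) are available in scikit-learn, so a clustering run over the pre-computed embeddings can be scored in a few lines. Below is a minimal sketch of that evaluation loop; it uses synthetic Gaussian blobs as stand-ins for the real embedding vectors, so the data setup here is illustrative only — substitute embeddings and species labels loaded from this repository for an actual run.

```python
# Sketch of the supervised clustering evaluation: K-Means on embedding
# vectors, scored with ARI and NMI against ground-truth labels.
# NOTE: synthetic blobs stand in for the real 1280-dim DINOv3 embeddings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# Stand-in for (embeddings, species labels) loaded from this repository
X, y = make_blobs(n_samples=500, centers=5, n_features=64, random_state=0)

# Cluster with K-Means (one of the six clustering methods benchmarked)
pred = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Score against ground truth; both metrics are 1.0 for a perfect match
ari = adjusted_rand_score(y, pred)
nmi = normalized_mutual_info_score(y, pred)
print(f"ARI: {ari:.3f}, NMI: {nmi:.3f}")
```

The unsupervised metrics (Silhouette, Calinski-Harabasz, Davies-Bouldin) follow the same pattern via `sklearn.metrics`, except they take `(X, pred)` instead of `(y, pred)` since they require no ground-truth labels.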