---

license: cc-by-4.0
task_categories:
- image-classification
- zero-shot-classification
tags:
- biology
- ecology
- wildlife
- camera-traps
- vision-transformers
- clustering
- zero-shot-learning
- biodiversity
- reproducibility
- benchmarking
- embeddings
- dinov3
- dinov2
- bioclip
- clip
- siglip
language:
- en
pretty_name: HUGO-Bench Paper Reproducibility Data
size_categories:
- 100K<n<1M
source_datasets:
- AI-EcoNet/HUGO-Bench
configs:
- config_name: primary_benchmarking
  data_files: primary_benchmarking/train-*.parquet
  default: true
- config_name: model_comparison
  data_files: model_comparison/train-*.parquet
- config_name: dimensionality_reduction
  data_files: dimensionality_reduction/train-*.parquet
- config_name: clustering_supervised
  data_files: clustering_supervised/train-*.parquet
- config_name: clustering_unsupervised
  data_files: clustering_unsupervised/train-*.parquet
- config_name: cluster_count_prediction
  data_files: cluster_count_prediction/train-*.parquet
- config_name: intra_species_variation
  data_files: intra_species_variation/train-*.parquet
- config_name: scaling_tests
  data_files: scaling_tests/train-*.parquet
- config_name: uneven_distribution
  data_files: uneven_distribution/train-*.parquet
- config_name: subsample_definitions
  data_files: subsample_definitions/train-*.parquet
- config_name: embeddings_dinov3_vith16plus
  data_files: embeddings_dinov3_vith16plus/train-*.parquet
- config_name: embeddings_dinov2_vitg14
  data_files: embeddings_dinov2_vitg14/train-*.parquet
- config_name: embeddings_bioclip2_vitl14
  data_files: embeddings_bioclip2_vitl14/train-*.parquet
- config_name: embeddings_clip_vitl14
  data_files: embeddings_clip_vitl14/train-*.parquet
- config_name: embeddings_siglip_vitb16
  data_files: embeddings_siglip_vitb16/train-*.parquet
---


# HUGO-Bench Paper Reproducibility

**Supplementary data and reproducibility materials for the paper:**

> **Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study**
>
> Hugo Markoff, Stefan Hein Bengtson, Michael Ørsted
>
> Aalborg University, Denmark

## Dataset Description

This repository contains complete experimental results, pre-computed embeddings, and execution logs from our comprehensive benchmarking study evaluating Vision Transformer models for zero-shot clustering of wildlife camera trap images.

### Related Resources

- **Source Images**: [AI-EcoNet/HUGO-Bench](https://huggingface.co/datasets/AI-EcoNet/HUGO-Bench) - 139,111 wildlife images
- **Code Repository**: Coming soon

## Repository Structure

```
├── primary_benchmarking/          # Main benchmark results (27,600 configurations)
├── model_comparison/              # Cross-model comparisons
├── dimensionality_reduction/      # UMAP/t-SNE/PCA analysis
├── clustering_supervised/         # Supervised clustering metrics
├── clustering_unsupervised/       # Unsupervised clustering results
├── cluster_count_prediction/      # Optimal cluster count analysis
├── intra_species_variation/       # Within-species cluster analysis
│   ├── train-*.parquet            # Analysis results
│   └── cluster_image_mappings.json  # Image-to-cluster assignments
├── scaling_tests/                 # Sample size scaling experiments
├── uneven_distribution/           # Class imbalance experiments
├── subsample_definitions/         # Reproducible subsample definitions
├── embeddings_*/                  # Pre-computed embeddings (5 models)
│   ├── embeddings_dinov3_vith16plus/  # 120K embeddings, 1280-dim
│   ├── embeddings_dinov2_vitg14/      # 120K embeddings, 1536-dim
│   ├── embeddings_bioclip2_vitl14/    # 120K embeddings, 768-dim
│   ├── embeddings_clip_vitl14/        # 120K embeddings, 768-dim
│   └── embeddings_siglip_vitb16/      # 120K embeddings, 768-dim
├── extreme_uneven_embeddings/     # Full dataset embeddings (PKL)
│   ├── aves_full_dinov3_embeddings.pkl      # 74,396 embeddings
│   └── mammalia_full_dinov3_embeddings.pkl  # 65,484 embeddings
└── execution_logs/                # Experiment execution logs
```

## Quick Start

### Load Primary Benchmark Results

```python
from datasets import load_dataset

# Load main benchmark results (27,600 configurations)
ds = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "primary_benchmarking")
print(f"Configurations: {len(ds['train'])}")
```

### Load Pre-computed Embeddings

```python
from datasets import load_dataset

# Load DINOv3 embeddings (120,000 images)
embeddings = load_dataset(
    "AI-EcoNet/HUGO-Bench-Paper-Reproducibility",
    "embeddings_dinov3_vith16plus"
)
print(f"Embeddings shape: {len(embeddings['train'])} x {len(embeddings['train'][0]['embedding'])}")
```

### Load Specific Analysis Results

```python
from datasets import load_dataset

# Model comparison results
model_comp = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "model_comparison")

# Scaling test results
scaling = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "scaling_tests")

# Intra-species variation analysis
intra = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "intra_species_variation")
```

### Load Cluster Image Mappings

The intra-species analysis includes a mapping file showing which images belong to which clusters:

```python
import json

from huggingface_hub import hf_hub_download

# Download mapping file
mapping_file = hf_hub_download(
    "AI-EcoNet/HUGO-Bench-Paper-Reproducibility",
    "intra_species_variation/cluster_image_mappings.json",
    repo_type="dataset"
)

with open(mapping_file) as f:
    mappings = json.load(f)

# Structure: {species: {run: {cluster: [image_names]}}}
print(f"Species analyzed: {list(mappings.keys())}")
```

### Load Full Dataset Embeddings

For the extreme uneven distribution experiments, we provide full dataset embeddings:

```python
import pickle

from huggingface_hub import hf_hub_download

# Download Aves embeddings (74,396 images)
pkl_file = hf_hub_download(
    "AI-EcoNet/HUGO-Bench-Paper-Reproducibility",
    "extreme_uneven_embeddings/aves_full_dinov3_embeddings.pkl",
    repo_type="dataset"
)

with open(pkl_file, "rb") as f:
    data = pickle.load(f)

print(f"Embeddings: {data['embeddings'].shape}")  # (74396, 1280)
print(f"Labels: {len(data['labels'])}")
print(f"Paths: {len(data['paths'])}")
```

## Experimental Setup

### Models Evaluated

| Model | Architecture | Embedding Dim | Pre-training |
|-------|-------------|---------------|--------------|
| DINOv3 | ViT-H/16+ | 1280 | Self-supervised |
| DINOv2 | ViT-G/14 | 1536 | Self-supervised |
| BioCLIP 2 | ViT-L/14 | 768 | Biology domain |
| CLIP | ViT-L/14 | 768 | Contrastive |
| SigLIP | ViT-B/16 | 768 | Sigmoid loss |
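For programmatic access, the table above can be mirrored as a lookup from model name to its embedding config and dimensionality (a convenience sketch; config names are taken from this card's YAML header, and the dict name is our own, not part of the released code):

```python
# Map each evaluated model to its embedding config name and embedding dimension.
# Config names match the dataset configs declared in this card's YAML header.
EMBEDDING_CONFIGS = {
    "DINOv3 ViT-H/16+": ("embeddings_dinov3_vith16plus", 1280),
    "DINOv2 ViT-G/14": ("embeddings_dinov2_vitg14", 1536),
    "BioCLIP 2 ViT-L/14": ("embeddings_bioclip2_vitl14", 768),
    "CLIP ViT-L/14": ("embeddings_clip_vitl14", 768),
    "SigLIP ViT-B/16": ("embeddings_siglip_vitb16", 768),
}

for model, (config, dim) in EMBEDDING_CONFIGS.items():
    print(f"{model}: config={config}, dim={dim}")
```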

### Clustering Methods

- K-Means, DBSCAN, HDBSCAN, Agglomerative, Spectral
- GMM (Gaussian Mixture Models)
- With and without dimensionality reduction (UMAP, t-SNE, PCA)
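A minimal sketch of one such pipeline (PCA reduction followed by K-Means, via scikit-learn). The random embeddings, component count, and cluster count here are illustrative stand-ins, not the paper's settings:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for precomputed embeddings: 300 vectors, 1280-dim (DINOv3-sized).
embeddings = rng.normal(size=(300, 1280))

# Reduce dimensionality before clustering, as in the PCA variant above.
reduced = PCA(n_components=50, random_state=0).fit_transform(embeddings)

# Cluster into a chosen number of groups (5 here, arbitrary for the sketch).
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(reduced)
print(labels.shape)  # (300,)
```

Swapping `KMeans` for `DBSCAN`, `AgglomerativeClustering`, `SpectralClustering`, or `GaussianMixture` follows the same fit-then-predict pattern.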

### Evaluation Metrics

- **Supervised**: Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), Accuracy, F1
- **Unsupervised**: Silhouette Score, Calinski-Harabasz Index, Davies-Bouldin Index
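All of these metrics are available in scikit-learn; a sketch on synthetic labels (random data, so the scores themselves are meaningless and are not the paper's results):

```python
import numpy as np
from sklearn.metrics import (
    adjusted_rand_score,
    calinski_harabasz_score,
    davies_bouldin_score,
    normalized_mutual_info_score,
    silhouette_score,
)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))            # stand-in embeddings
true_labels = rng.integers(0, 4, size=200)  # ground-truth species labels
pred_labels = rng.integers(0, 4, size=200)  # predicted cluster assignments

# Supervised metrics compare predicted clusters against ground truth.
ari = adjusted_rand_score(true_labels, pred_labels)
nmi = normalized_mutual_info_score(true_labels, pred_labels)

# Unsupervised metrics score cluster geometry without ground truth.
sil = silhouette_score(X, pred_labels)
ch = calinski_harabasz_score(X, pred_labels)
db = davies_bouldin_score(X, pred_labels)

print(f"ARI={ari:.3f} NMI={nmi:.3f} Silhouette={sil:.3f}")
```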

## Citation

If you use this dataset, please cite:

```bibtex
@article{markoff2026vision,
  title={Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study},
  author={Markoff, Hugo and Bengtson, Stefan Hein and Ørsted, Michael},
  journal={[Journal/Conference]},
  year={2026}
}
```

## License

This dataset is released under the [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/).

## Contact

For questions or issues, please open an issue in this repository or contact the authors.