---
viewer: false
tags:
- adversarial-robustness
- image-classification
- robustness-benchmark
---
# RobustGenBench
A benchmark for evaluating the adversarial robustness of zero-shot image classifiers across six fine-grained / domain-specific datasets and a range of threat models.
A small stratified sample (2 images per class) is also available for quick inspection: https://huggingface.co/datasets/legolasflagstaff/RobustGenBench-sample
## Structure
```
├── caltech101_processed.tar.zst
├── fgvc-aircraft-2013b_processed.tar.zst
├── flowers-102_processed.tar.zst
├── oxford-iiit-pet_processed.tar.zst
├── stanford_cars_processed.tar.zst
├── uc-merced-land-use-dataset_processed.tar.zst
├── class_names/
│   └── <dataset>.json    # integer-label → class-name mappings
└── adversarial/
    ├── common/common_severity3/<dataset>__common_severity3_processed.tar.zst
    ├── random/linf_eps30_random_uniform/<dataset>__random_linf_eps30_random_uniform_processed.tar.zst
    ├── zeroshot_clip_vitb16_laion2b/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
    ├── zeroshot_clip_vith14_laion2b/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
    ├── zeroshot_metaclip_vith14_fullcc2_5b/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
    ├── zeroshot_siglip2_base_patch16_224/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
    ├── zeroshot_siglip2_so400m_patch14_384/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
    ├── zeroshot_siglip2_so400m_patch16_naflex/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
    └── zeroshot_siglip2_so400m_patch16_naflex_patchify/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
```
Each archive is a `.tar.zst` containing flat-numbered PNGs and a `labels.csv`.
### Clean archives
```
<dataset>_processed.tar.zst
├── train/
│   ├── 00000.png
│   ├── 00001.png
│   ├── ...
│   └── labels.csv
├── val/
│   ├── 00000.png
│   ├── ...
│   └── labels.csv
├── test/
│   ├── 00000.png
│   ├── ...
│   └── labels.csv
└── metadata.json    # split counts and number of classes N
```
### Adversarial archives
```
<dataset>__<threat_model>_processed.tar.zst
└── test/
    ├── 00000.png
    ├── ...
    └── labels.csv
```
Filenames are aligned across all archives: `test/00000.png` in every adversarial archive corresponds to the same source image (and the same label) used to generate the perturbation. `labels.csv` provides the `filename,label` mapping with integer class indices; resolve these to class names via `class_names/<dataset>.json`.
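Resolving integer labels to class names is a dictionary lookup once both files are parsed. A minimal sketch with inlined stand-ins for the two files (the exact `labels.csv` filename format and the JSON key type are assumptions; the class names shown are illustrative):

```python
import csv
import io
import json

# Stand-ins for file contents; the real data comes from a labels.csv inside
# an archive and from class_names/<dataset>.json (shapes assumed).
labels_csv = "filename,label\ntest/00000.png,2\ntest/00001.png,0\n"
class_names_json = '{"0": "agricultural", "1": "airplane", "2": "baseballdiamond"}'

# JSON object keys are always strings, so coerce them back to ints.
id_to_name = {int(k): v for k, v in json.loads(class_names_json).items()}

rows = list(csv.DictReader(io.StringIO(labels_csv)))
named = [(r["filename"], id_to_name[int(r["label"])]) for r in rows]
print(named)  # [('test/00000.png', 'baseballdiamond'), ('test/00001.png', 'agricultural')]
```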
## Datasets
| Dataset | Classes | Test size |
|---|---|---|
| Caltech101 | 101 | 1000 |
| FGVC-Aircraft 2013b | 100 | 1000 |
| Oxford Flowers 102 | 102 | 1000 |
| Oxford-IIIT Pet | 37 | 1000 |
| Stanford Cars | 196 | 1000 |
| UC Merced Land Use | 21 | 420 |
## Threat models
The `adversarial/` tree is organized by the surrogate model used to craft the attack, then by threat model.
Untargeted attacks (AutoAttack standard, per surrogate):
- `linf_eps8_autoattack_standard` – L∞, ε = 8/255
- `linf_eps30_autoattack_standard` – L∞, ε = 30/255
- `l2_eps2_autoattack_standard` – L2, ε = 2
- `l2_eps8_autoattack_standard` – L2, ε = 8
- `l1_eps75_autoattack_standard` – L1, ε = 75
- `l1_eps300_autoattack_standard` – L1, ε = 300
Surrogate-agnostic baselines:
- `common/common_severity3` – common corruption suite at severity 3
- `random/linf_eps30_random_uniform` – random uniform L∞ noise at ε = 30/255
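The random baseline is simple to reproduce in principle: sample i.i.d. uniform noise in [-ε, ε] per pixel and clip back to the valid range. A sketch of that idea (not the benchmark's exact generation code; the seed, image shape, and dtype are assumptions):

```python
import numpy as np

eps = 30 / 255
rng = np.random.default_rng(0)  # seed is an assumption, not the benchmark's

x = rng.random((3, 224, 224)).astype(np.float32)        # stand-in clean image in [0, 1]
delta = rng.uniform(-eps, eps, size=x.shape).astype(np.float32)
x_noisy = np.clip(x + delta, 0.0, 1.0)                  # stay in the valid pixel range

# The perturbation respects the L-infinity budget by construction.
assert np.abs(x_noisy - x).max() <= eps + 1e-6
```

Clipping can only shrink the per-pixel perturbation, so the L∞ constraint still holds after it.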
## Loading
```python
import tarfile, io, csv

import zstandard as zstd
from PIL import Image
from huggingface_hub import hf_hub_download

archive = hf_hub_download(
    repo_id="legolasflagstaff/RobustGenBench",
    repo_type="dataset",
    filename="uc-merced-land-use-dataset_processed.tar.zst",
)

# Decompress the zstd stream fully into memory, then read the tar from the buffer.
with open(archive, "rb") as f:
    buf = io.BytesIO(zstd.ZstdDecompressor().stream_reader(f).read())

with tarfile.open(fileobj=buf, mode="r:") as tar:
    images = {}
    for m in tar.getmembers():
        if m.name.startswith("test/") and m.name.endswith(".png"):
            images[m.name] = Image.open(io.BytesIO(tar.extractfile(m).read())).convert("RGB")
    labels_f = tar.extractfile(tar.getmember("test/labels.csv"))
    labels = list(csv.DictReader(io.TextIOWrapper(labels_f)))

print(f"Loaded {len(images)} images, {len(labels)} label rows")
```
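With images and labels in hand, robust accuracy under a given threat model reduces to agreement between a classifier's predictions on the adversarial test images and the clean labels. A minimal sketch with a hypothetical predictions dict standing in for a real model (the `labels.csv` filename format is an assumption):

```python
import csv
import io

def accuracy(label_rows, predictions):
    """Fraction of rows whose integer label matches the predicted class index."""
    hits = sum(int(row["label"]) == predictions[row["filename"]] for row in label_rows)
    return hits / len(label_rows)

# Stand-in labels.csv content and hypothetical model outputs.
rows = list(csv.DictReader(io.StringIO(
    "filename,label\ntest/00000.png,3\ntest/00001.png,7\ntest/00002.png,1\n"
)))
preds = {"test/00000.png": 3, "test/00001.png": 5, "test/00002.png": 1}
print(accuracy(rows, preds))  # 0.6666666666666666
```

Because filenames are aligned across archives, the same `predictions` dict can be scored against the clean archive and against each adversarial archive to compare clean and robust accuracy.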
## Citation
If you use RobustGenBench in your work, please cite:
```bibtex
@inproceedings{robustgenbench2025,
  title  = {RobustGenBench: ...},
  author = {...},
  year   = {2025},
}
```
## License
[Specify license here β e.g. CC-BY-4.0, or per-dataset license inheritance.]