---
viewer: false
tags:
  - adversarial-robustness
  - image-classification
  - robustness-benchmark
---

# RobustGenBench

A benchmark for evaluating the adversarial robustness of zero-shot image classifiers across six fine-grained / domain-specific datasets and a range of threat models.

A small **stratified sample** (2 images per class) is also available for quick inspection: 👉 https://huggingface.co/datasets/legolasflagstaff/RobustGenBench-sample

## Structure

```
caltech101_processed.tar.zst
fgvc-aircraft-2013b_processed.tar.zst
flowers-102_processed.tar.zst
oxford-iiit-pet_processed.tar.zst
stanford_cars_processed.tar.zst
uc-merced-land-use-dataset_processed.tar.zst

class_names/
  <dataset>.json                 ← integer-label → class-name mappings

adversarial/
  common/common_severity3/<dataset>__common_severity3_processed.tar.zst
  random/linf_eps30_random_uniform/<dataset>__random_linf_eps30_random_uniform_processed.tar.zst
  zeroshot_clip_vitb16_laion2b/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
  zeroshot_clip_vith14_laion2b/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
  zeroshot_metaclip_vith14_fullcc2_5b/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
  zeroshot_siglip2_base_patch16_224/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
  zeroshot_siglip2_so400m_patch14_384/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
  zeroshot_siglip2_so400m_patch16_naflex/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
  zeroshot_siglip2_so400m_patch16_naflex_patchify/<threat_model>/<dataset>__<threat_model>_processed.tar.zst
```

Each archive is a `.tar.zst` containing flat-numbered PNGs and a `labels.csv`.

### Clean archives

```
<dataset>_processed.tar.zst
├── train/
│   ├── 00000.png
│   ├── 00001.png
│   ├── ...
│   └── labels.csv
├── val/
│   ├── 00000.png
│   ├── ...
│   └── labels.csv
├── test/
│   ├── 00000.png
│   ├── ...
│   └── labels.csv
└── metadata.json                ← split counts and number of classes N
```

### Adversarial archives

```
<dataset>__<threat_model>_processed.tar.zst
└── test/
    ├── 00000.png
    ├── ...
    └── labels.csv
```

Filenames are aligned across all archives: `test/00000.png` in every adversarial archive corresponds to the same source image (and same label) used to generate the perturbation. `labels.csv` provides the `filename,label` mapping with integer class indices; resolve to class names via `class_names/<dataset>.json`.
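The `labels.csv` → `class_names/<dataset>.json` join can be sketched as follows. The two inline strings below are stand-ins for real file contents, and the string-keyed JSON shape shown is an assumption for illustration:

```python
import csv, io, json

# Stand-ins for test/labels.csv and class_names/<dataset>.json;
# the string-integer-keyed mapping shape is an assumption.
labels_csv = "filename,label\n00000.png,2\n00001.png,0\n"
class_names_json = '{"0": "agricultural", "1": "airplane", "2": "baseballdiamond"}'

class_names = json.loads(class_names_json)
rows = list(csv.DictReader(io.StringIO(labels_csv)))

# Resolve each integer class index to its class name.
resolved = {r["filename"]: class_names[r["label"]] for r in rows}
print(resolved)  # {'00000.png': 'baseballdiamond', '00001.png': 'agricultural'}
```

Note that `csv.DictReader` yields labels as strings, which conveniently match the string keys JSON objects use.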

## Datasets

| Dataset | Classes | Test size |
|---|---|---|
| Caltech101 | 101 | 1000 |
| FGVC-Aircraft 2013b | 100 | 1000 |
| Oxford Flowers 102 | 102 | 1000 |
| Oxford-IIIT Pet | 37 | 1000 |
| Stanford Cars | 196 | 1000 |
| UC Merced Land Use | 21 | 420 |

## Threat models

The `adversarial/` tree is organized by **surrogate model used to craft the attack**, then by **threat model**.

**Untargeted attacks (AutoAttack standard, per surrogate):**
- `linf_eps8_autoattack_standard` — L∞, ε = 8/255
- `linf_eps30_autoattack_standard` — L∞, ε = 30/255
- `l2_eps2_autoattack_standard` — L2, ε = 2
- `l2_eps8_autoattack_standard` — L2, ε = 8
- `l1_eps75_autoattack_standard` — L1, ε = 75
- `l1_eps300_autoattack_standard` — L1, ε = 300

**Surrogate-agnostic baselines:**
- `common/common_severity3` — common corruption suite at severity 3
- `random/linf_eps30_random_uniform` — random uniform L∞ noise at ε = 30/255
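Given the layout in the Structure section, the repo-relative path of any adversarial archive can be assembled from surrogate, threat model, and dataset. `adv_path` below is a hypothetical helper, not part of any release:

```python
def adv_path(surrogate: str, threat_model: str, dataset: str) -> str:
    """Build the repo-relative path of an adversarial archive following
    the adversarial/<surrogate>/<threat_model>/ layout described above."""
    return f"adversarial/{surrogate}/{threat_model}/{dataset}__{threat_model}_processed.tar.zst"

print(adv_path(
    "zeroshot_clip_vitb16_laion2b",
    "linf_eps8_autoattack_standard",
    "stanford_cars",
))
# adversarial/zeroshot_clip_vitb16_laion2b/linf_eps8_autoattack_standard/stanford_cars__linf_eps8_autoattack_standard_processed.tar.zst
```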

## Loading

```python
import tarfile, io, csv
import zstandard as zstd
from PIL import Image
from huggingface_hub import hf_hub_download

archive = hf_hub_download(
    repo_id="legolasflagstaff/RobustGenBench",
    repo_type="dataset",
    filename="uc-merced-land-use-dataset_processed.tar.zst",
)

# Decompress the zstd stream fully into memory, then open the plain tar.
with open(archive, "rb") as f:
    buf = io.BytesIO(zstd.ZstdDecompressor().stream_reader(f).read())

with tarfile.open(fileobj=buf, mode="r:") as tar:
    # Collect all test-split PNGs, keyed by their archive path.
    images = {}
    for m in tar.getmembers():
        if m.name.startswith("test/") and m.name.endswith(".png"):
            images[m.name] = Image.open(io.BytesIO(tar.extractfile(m).read())).convert("RGB")
    # labels.csv maps each filename to its integer class index.
    labels_f = tar.extractfile(tar.getmember("test/labels.csv"))
    labels = list(csv.DictReader(io.TextIOWrapper(labels_f)))

print(f"Loaded {len(images)} images, {len(labels)} label rows")
```
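Because filenames are aligned across clean and adversarial archives, clean and robust accuracy can be compared per image once a classifier's predictions are collected. A minimal sketch with synthetic stand-in predictions (no real model or download involved):

```python
# Synthetic stand-ins: ground-truth labels plus predictions on clean and
# adversarial versions of the same aligned test files.
labels      = {"test/00000.png": 3, "test/00001.png": 7, "test/00002.png": 1}
pred_clean  = {"test/00000.png": 3, "test/00001.png": 7, "test/00002.png": 0}
pred_attack = {"test/00000.png": 3, "test/00001.png": 2, "test/00002.png": 0}

def accuracy(preds, truth):
    """Fraction of aligned filenames where the prediction matches the label."""
    return sum(preds[k] == v for k, v in truth.items()) / len(truth)

clean_acc  = accuracy(pred_clean, labels)   # 2/3 correct on clean images
robust_acc = accuracy(pred_attack, labels)  # 1/3 correct under attack
print(f"clean={clean_acc:.3f} robust={robust_acc:.3f}")
```

The same per-filename join works for any pair of archives, since every `test/<id>.png` refers to the same source image throughout.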

## Citation

If you use RobustGenBench in your work, please cite:

```bibtex
@inproceedings{robustgenbench2025,
  title  = {RobustGenBench: ...},
  author = {...},
  year   = {2025},
}
```

## License

[Specify license here β€” e.g. CC-BY-4.0, or per-dataset license inheritance.]