---
license: mit
tags:
- machine-unlearning
- cifar10
- computer-vision
- robustness
- benchmarks
---
<p align="center">
<img src="https://raw.githubusercontent.com/easydub/EasyDUB-code/main/assets/easydub.png" width="200" alt="EasyDUB mascot">
</p>
## EasyDUB Dataset
### Easy **D**ata **U**nlearning **B**ench
Precomputed CIFAR-10 data for KLOM (KL-divergence of Margins) evaluation of data-unlearning methods.
This dataset contains:
- **200 pretrain models**: ResNet9 models trained on the full CIFAR-10 training set (50,000 samples).
- **200 oracle models per forget set**: ResNet9 models retrained on the retain set (train minus forget) for each of 10 forget sets.
- **Logits and margins**: Precomputed logits and margins for all models on train/val/forget/retain splits.
All models are checkpointed at epoch 23 (out of 24 total training epochs).
### Directory structure
The on-disk layout is:
```text
EasyDUB-dataset/
├── models/
│   └── cifar10/
│       ├── pretrain/
│       │   └── resnet9/
│       │       └── id_X_epoch_23.pt      # 200 models (X = 0–199)
│       └── oracle/
│           └── forget_Z/
│               └── resnet9/
│                   └── id_X_epoch_23.pt  # 200 models per forget set
│
├── logits/
│   └── cifar10/
│       ├── pretrain/
│       │   ├── retain/
│       │   │   └── resnet9/
│       │   │       └── id_X_epoch_23.npy # Full train set logits
│       │   ├── val/
│       │   │   └── resnet9/
│       │   │       └── id_X_epoch_23.npy # Validation logits
│       │   └── forget_Z/
│       │       └── resnet9/
│       │           └── id_X_epoch_23.npy # Forget-set logits
│       └── oracle/
│           └── forget_Z/
│               ├── retain/
│               │   └── resnet9/
│               │       └── id_X_epoch_23.npy # Retain logits
│               ├── forget/
│               │   └── resnet9/
│               │       └── id_X_epoch_23.npy # Forget logits
│               └── val/
│                   └── resnet9/
│                       └── id_X_epoch_23.npy # Validation logits
│
├── margins/
│   └── cifar10/
│       └── [same structure as logits/]
│
└── forget_sets/
    └── cifar10/
        └── forget_set_Z.npy              # Indices into CIFAR-10 train set
```
### File naming
- **Models**: `id_{MODEL_ID}_epoch_{EPOCH}.pt` (e.g. `id_42_epoch_23.pt`)
- **Logits / margins**: `id_{MODEL_ID}_epoch_{EPOCH}.npy`
- **Forget sets**: `forget_set_{SET_ID}.npy` (e.g. `forget_set_1.npy`)
### Shapes and dtypes
- **Logits**: `(n_samples, 10)` NumPy arrays of `float32` — raw model outputs for the 10 CIFAR-10 classes.
- **Margins**: `(n_samples,)` NumPy arrays of `float32` — scalar margins (see formula below).
- **Forget sets**: `(n_forget_samples,)` NumPy arrays of integer indices into the CIFAR-10 training set in `[0, 49_999]`.
Typical sizes:
- Train set: 50,000 samples
- Validation set: 10,000 samples
- Forget sets: 10–1,000 samples (varies by set)
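A quick way to verify that loaded arrays match the documented shapes and dtypes (synthetic arrays stand in for real files here):

```python
import numpy as np

# Synthetic stand-ins with the documented shapes and dtypes;
# in practice these come from np.load on files in logits/, margins/, forget_sets/.
logits = np.zeros((10_000, 10), dtype=np.float32)   # e.g. val-set logits
margins = np.zeros((10_000,), dtype=np.float32)     # e.g. val-set margins
forget_set = np.arange(100)                         # indices into the train set

assert logits.ndim == 2 and logits.shape[1] == 10
assert margins.ndim == 1 and margins.shape[0] == logits.shape[0]
assert forget_set.min() >= 0 and forget_set.max() <= 49_999
```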
### Margin definition
For each sample with logits `logits` and true label `true_label`:
```python
import torch


def compute_margin(logits: torch.Tensor, true_label: int) -> torch.Tensor:
    # Margin = true-class logit minus log-sum-exp of all other class logits.
    logit_other = logits.clone()
    logit_other[true_label] = -torch.inf
    return logits[true_label] - logit_other.logsumexp(dim=-1)
```
Higher margins indicate higher confidence in the correct class relative to all others (via log-sum-exp).
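For precomputed logit arrays of shape `(n_samples, 10)`, the same quantity can be computed in one vectorized pass. This NumPy sketch (the `compute_margins` helper is not part of EasyDUB-code) is numerically equivalent to the per-sample PyTorch function above:

```python
import numpy as np


def compute_margins(logits: np.ndarray, labels: np.ndarray) -> np.ndarray:
    # logits: (n, 10) float32, labels: (n,) int -> margins: (n,) float32
    n = logits.shape[0]
    rows = np.arange(n)
    true_logits = logits[rows, labels]
    other = logits.astype(np.float64).copy()
    other[rows, labels] = -np.inf          # exclude the true class
    # Numerically stable log-sum-exp over the remaining 9 classes.
    m = other.max(axis=1)
    lse = m + np.log(np.exp(other - m[:, None]).sum(axis=1))
    return (true_logits - lse).astype(np.float32)


margins = compute_margins(np.random.randn(5, 10).astype(np.float32),
                          np.array([0, 1, 2, 3, 4]))
```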
### Forget sets
The dataset includes 10 CIFAR-10 forget sets:
- **Forget set 1**: 10 random samples
- **Forget set 2**: 100 random samples
- **Forget set 3**: 1,000 random samples
- **Forget set 4**: 10 samples with highest projection onto the 1st principal component
- **Forget set 5**: 100 samples with highest projection onto the 1st principal component
- **Forget set 6**: 250 samples with highest + 250 with lowest projection onto the 1st principal component
- **Forget set 7**: 10 samples with highest projection onto the 2nd principal component
- **Forget set 8**: 100 samples with highest projection onto the 2nd principal component
- **Forget set 9**: 250 samples with highest + 250 with lowest projection onto the 2nd principal component
- **Forget set 10**: 100 samples closest in CLIP image space to a reference cassowary image
Each `forget_set_Z.npy` is a 1D array of training indices.
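Given a forget set, the corresponding retain indices are its complement within the 50,000 training indices. A minimal NumPy sketch (a synthetic forget set stands in for a real `forget_set_Z.npy`):

```python
import numpy as np

# In practice:
# forget_idx = np.load("EasyDUB-dataset/forget_sets/cifar10/forget_set_1.npy")
forget_idx = np.array([3, 7, 42, 19_999, 49_999])  # synthetic example

all_idx = np.arange(50_000)
retain_idx = np.setdiff1d(all_idx, forget_idx)  # sorted complement

assert len(retain_idx) == 50_000 - len(forget_idx)
```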
### Quick start
The companion [EasyDUB-code](https://github.com/easydub/EasyDUB-code) repository provides utilities and unlearning methods on top of this dataset.
Here is a minimal example using only NumPy:
```python
import numpy as np

root = "EasyDUB-dataset"

# Margins for a single pretrain model on the validation set
margins = np.load(f"{root}/margins/cifar10/pretrain/val/resnet9/id_0_epoch_23.npy")

# Oracle margins for the same model index and forget set (example: forget set 1)
oracle_margins = np.load(
    f"{root}/margins/cifar10/oracle/forget_1/val/resnet9/id_0_epoch_23.npy"
)

print(margins.shape, oracle_margins.shape)
```
For a higher-level end-to-end demo (including unlearning methods and KLOM computation), see the [EasyDUB-code](https://github.com/easydub/EasyDUB-code) GitHub repository. In particular, `strong_test.py` in `EasyDUB-code` runs a reproducible noisy-SGD unlearning experiment comparing:
- `KLOM(pretrain, oracle)`
- `KLOM(noisy_descent, oracle)`
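Conceptually, KLOM compares, per evaluation sample, the distribution of margins across one group of models with the distribution across the oracle models. The exact estimator lives in EasyDUB-code; the sketch below is a simplified Gaussian approximation (fit a normal to each per-sample margin distribution, then use the closed-form KL between Gaussians), not the official computation:

```python
import numpy as np


def gaussian_klom(margins_a: np.ndarray, margins_b: np.ndarray) -> np.ndarray:
    # margins_*: (n_models, n_samples). Fit a Gaussian per sample per group,
    # then compute KL(N_a || N_b) in closed form for each sample.
    mu_a, mu_b = margins_a.mean(axis=0), margins_b.mean(axis=0)
    var_a = margins_a.var(axis=0) + 1e-8   # small floor for stability
    var_b = margins_b.var(axis=0) + 1e-8
    return 0.5 * (np.log(var_b / var_a) + (var_a + (mu_a - mu_b) ** 2) / var_b - 1.0)


rng = np.random.default_rng(0)
a = rng.normal(size=(200, 1000))           # 200 models x 1000 samples
kl = gaussian_klom(a, a)                   # identical groups -> zero KL
```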
### Training procedure (summary)
All pretrain and oracle models share the same training setup:
- Optimizer: SGD with momentum
- Learning rate: 0.4 (triangular schedule peaking at epoch 5)
- Momentum: 0.9
- Weight decay: 5e-4
- Epochs: 24 total, checkpoint used here is epoch 23
- Mixed precision: enabled (FP16)
- Label smoothing: 0.0
Pretrain models are trained on the full CIFAR-10 training set. Oracle models are trained on the retain set (training set minus the corresponding forget set) for each forget set.
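The triangular schedule can be sketched as a piecewise-linear ramp: warm up to the peak learning rate at epoch 5, then decay linearly to zero. This is an illustrative reconstruction; the exact per-iteration schedule is defined in EasyDUB-code:

```python
import numpy as np


def triangular_lr(epoch: float, peak_lr: float = 0.4,
                  peak_epoch: int = 5, total_epochs: int = 24) -> float:
    # Linear warmup to peak_lr at peak_epoch, then linear decay to 0.
    return float(np.interp(epoch, [0, peak_epoch, total_epochs],
                           [0.0, peak_lr, 0.0]))
```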
### Citation
If you use EasyDUB in your work, please cite:
```bibtex
@misc{rinberg2026easydataunlearningbench,
title={Easy Data Unlearning Bench},
author={Roy Rinberg and Pol Puigdemont and Martin Pawelczyk and Volkan Cevher},
year={2026},
eprint={2602.16400},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2602.16400},
}
```
EasyDUB builds on the KLOM metric introduced in:
```bibtex
@misc{georgiev2024attributetodeletemachineunlearningdatamodel,
title = {Attribute-to-Delete: Machine Unlearning via Datamodel Matching},
author = {Kristian Georgiev and Roy Rinberg and Sung Min Park and Shivam Garg and Andrew Ilyas and Aleksander Madry and Seth Neel},
year = {2024},
eprint = {2410.23232},
archivePrefix = {arXiv},
primaryClass = {cs.LG},
url = {https://arxiv.org/abs/2410.23232},
}
```