---
license: mit
tags:
- machine-unlearning
- cifar10
- computer-vision
- robustness
- benchmarks
---
# EasyDUB Dataset

**Easy Data Unlearning Bench**

Precomputed CIFAR-10 data for KLOM (KL-divergence of Margins) evaluation of data-unlearning methods.
This dataset contains:
- 200 pretrain models: ResNet9 models trained on the full CIFAR-10 training set (50,000 samples).
- 200 oracle models per forget set: ResNet9 models retrained on the retain set (train minus forget) for each of 10 forget sets.
- Logits and margins: Precomputed logits and margins for all models on train/val/forget/retain splits.
All models are checkpointed at epoch 23 (out of 24 total training epochs).
## Directory structure
The on-disk layout is:
```
EasyDUB-dataset/
├── models/
│   └── cifar10/
│       ├── pretrain/
│       │   └── resnet9/
│       │       └── id_X_epoch_23.pt        # 200 models (X = 0–199)
│       └── oracle/
│           └── forget_Z/
│               └── resnet9/
│                   └── id_X_epoch_23.pt    # 200 models per forget set
│
├── logits/
│   └── cifar10/
│       ├── pretrain/
│       │   ├── retain/
│       │   │   └── resnet9/
│       │   │       └── id_X_epoch_23.npy   # Full train set logits
│       │   ├── val/
│       │   │   └── resnet9/
│       │   │       └── id_X_epoch_23.npy   # Validation logits
│       │   └── forget_Z/
│       │       └── resnet9/
│       │           └── id_X_epoch_23.npy   # Forget-set logits
│       └── oracle/
│           └── forget_Z/
│               ├── retain/
│               │   └── resnet9/
│               │       └── id_X_epoch_23.npy   # Retain logits
│               ├── forget/
│               │   └── resnet9/
│               │       └── id_X_epoch_23.npy   # Forget logits
│               └── val/
│                   └── resnet9/
│                       └── id_X_epoch_23.npy   # Validation logits
│
├── margins/
│   └── cifar10/
│       └── [same structure as logits/]
│
└── forget_sets/
    └── cifar10/
        └── forget_set_Z.npy                # Indices into CIFAR-10 train set
```
## File naming

- Models: `id_{MODEL_ID}_epoch_{EPOCH}.pt` (e.g. `id_42_epoch_23.pt`)
- Logits / margins: `id_{MODEL_ID}_epoch_{EPOCH}.npy`
- Forget sets: `forget_set_{SET_ID}.npy` (e.g. `forget_set_1.npy`)
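Given this naming scheme, paths can be assembled programmatically. A small helper (a sketch of our own, not part of EasyDUB-code; the function name is hypothetical):

```python
from pathlib import Path

def artifact_path(root: str, kind: str, split_dir: str,
                  model_id: int, epoch: int = 23, arch: str = "resnet9") -> Path:
    """Build <root>/<kind>/cifar10/<split_dir>/<arch>/id_<id>_epoch_<epoch>.npy.

    kind:      "logits" or "margins"
    split_dir: e.g. "pretrain/val" or "oracle/forget_1/retain"
    """
    return Path(root) / kind / "cifar10" / split_dir / arch / f"id_{model_id}_epoch_{epoch}.npy"

# Example: margins of pretrain model 0 on the validation split
p = artifact_path("EasyDUB-dataset", "margins", "pretrain/val", 0)
```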
## Shapes and dtypes

- Logits: `(n_samples, 10)` NumPy arrays of `float32` containing raw model outputs for the 10 CIFAR-10 classes.
- Margins: `(n_samples,)` NumPy arrays of `float32` containing scalar margins (see formula below).
- Forget sets: `(n_forget_samples,)` NumPy arrays of integer indices into the CIFAR-10 training set, in `[0, 49_999]`.
Typical sizes:
- Train set: 50_000 samples
- Validation set: 10_000 samples
- Forget sets: 10–1000 samples (varies by set)
## Margin definition

For each sample with logits `logits` and true label `true_label`:

```python
import torch

def compute_margin(logits: torch.Tensor, true_label: int) -> torch.Tensor:
    # Mask out the true class, then compare its logit to the
    # log-sum-exp of all remaining logits.
    logit_other = logits.clone()
    logit_other[true_label] = -torch.inf
    return logits[true_label] - logit_other.logsumexp(dim=-1)
Higher margins indicate higher confidence in the correct class relative to all others (via log-sum-exp).
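For the precomputed `(n_samples, 10)` logit arrays, the same margin can be computed in batch. A vectorized NumPy sketch (our own helper, not from EasyDUB-code):

```python
import numpy as np

def compute_margins_batch(logits: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Vectorized margin for an (n_samples, 10) logit array."""
    idx = np.arange(logits.shape[0])
    true_logit = logits[idx, labels]
    # Exclude the true class from the log-sum-exp.
    other = logits.astype(np.float64).copy()
    other[idx, labels] = -np.inf
    # Numerically stable log-sum-exp over the remaining classes.
    m = other.max(axis=1)
    lse = m + np.log(np.exp(other - m[:, None]).sum(axis=1))
    return (true_logit - lse).astype(np.float32)
```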
## Forget sets
The dataset includes 10 CIFAR-10 forget sets:
- Forget set 1: 10 random samples
- Forget set 2: 100 random samples
- Forget set 3: 1_000 random samples
- Forget set 4: 10 samples with highest projection onto the 1st principal component
- Forget set 5: 100 samples with highest projection onto the 1st principal component
- Forget set 6: 250 samples with highest + 250 with lowest projection onto the 1st principal component
- Forget set 7: 10 samples with highest projection onto the 2nd principal component
- Forget set 8: 100 samples with highest projection onto the 2nd principal component
- Forget set 9: 250 samples with highest + 250 with lowest projection onto the 2nd principal component
- Forget set 10: 100 samples closest in CLIP image space to a reference cassowary image
Each `forget_set_Z.npy` is a 1D array of training indices.
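The corresponding retain set is the complement of the forget indices within the 50,000 training indices; a minimal sketch (the small array below stands in for a loaded `forget_set_Z.npy`):

```python
import numpy as np

# Stand-in for: forget = np.load("EasyDUB-dataset/forget_sets/cifar10/forget_set_1.npy")
forget = np.array([3, 17, 42], dtype=np.int64)

# Retain indices = all training indices not in the forget set.
retain = np.setdiff1d(np.arange(50_000), forget)
assert retain.size == 50_000 - forget.size
```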
## Quick start

The companion EasyDUB-code repository provides utilities and unlearning methods on top of this dataset.
Here is a minimal example using only NumPy:

```python
import numpy as np

root = "EasyDUB-dataset"

# Load margins for a single pretrain model on the validation set
margins = np.load(f"{root}/margins/cifar10/pretrain/val/resnet9/id_0_epoch_23.npy")

# Load oracle margins for the same model index and forget set (example: forget_set_1)
oracle_margins = np.load(
    f"{root}/margins/cifar10/oracle/forget_1/val/resnet9/id_0_epoch_23.npy"
)

print(margins.shape, oracle_margins.shape)
```
For a higher-level end-to-end demo (including unlearning methods and KLOM computation), see the EasyDUB-code GitHub repository. In particular, `strong_test.py` in EasyDUB-code runs a reproducible noisy-SGD unlearning experiment comparing:

- KLOM(pretrain, oracle)
- KLOM(noisy_descent, oracle)
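KLOM compares, per evaluation sample, the distribution of margins across one population of models with the distribution across the oracle models. Purely as an illustration of the idea (this is not the official KLOM implementation, which lives in EasyDUB-code), a crude histogram-based KL estimate between two sets of scalar margins might look like:

```python
import numpy as np

def kl_hist(p_samples: np.ndarray, q_samples: np.ndarray,
            bins: int = 20, eps: float = 1e-6) -> float:
    """Rough KL(p || q) estimate from samples via a shared histogram binning."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    # Smooth and normalize the empirical bin masses.
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float((p * np.log(p / q)).sum())
```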
## Training procedure (summary)
All pretrain and oracle models share the same training setup:
- Optimizer: SGD with momentum
- Learning rate: 0.4 (triangular schedule peaking at epoch 5)
- Momentum: 0.9
- Weight decay: 5e-4
- Epochs: 24 total, checkpoint used here is epoch 23
- Mixed precision: enabled (FP16)
- Label smoothing: 0.0
Pretrain models are trained on the full CIFAR-10 training set. Oracle models are trained on the retain set (training set minus the corresponding forget set) for each forget set.
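The triangular schedule above can be read as a linear ramp up to the peak learning rate at epoch 5, then a linear ramp back down by the final epoch. An illustrative reconstruction (the authoritative schedule is defined in EasyDUB-code; the exact shape here is our assumption):

```python
def triangular_lr(epoch: float, peak_lr: float = 0.4,
                  peak_epoch: int = 5, total_epochs: int = 24) -> float:
    """Linear warmup to peak_lr at peak_epoch, then linear decay to 0."""
    if epoch <= peak_epoch:
        return peak_lr * epoch / peak_epoch
    return peak_lr * (total_epochs - epoch) / (total_epochs - peak_epoch)
```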
## Citation
If you use EasyDUB in your work, please cite:
```bibtex
@misc{rinberg2026easydataunlearningbench,
  title={Easy Data Unlearning Bench},
  author={Roy Rinberg and Pol Puigdemont and Martin Pawelczyk and Volkan Cevher},
  year={2026},
  eprint={2602.16400},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2602.16400},
}
```
EasyDUB builds on the KLOM metric introduced in:
```bibtex
@misc{georgiev2024attributetodeletemachineunlearningdatamodel,
  title={Attribute-to-Delete: Machine Unlearning via Datamodel Matching},
  author={Kristian Georgiev and Roy Rinberg and Sung Min Park and Shivam Garg and Andrew Ilyas and Aleksander Madry and Seth Neel},
  year={2024},
  eprint={2410.23232},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2410.23232},
}
```